Go is my hammer, and everything is a nail (maragu.dev)
479 points by markusw 3 months ago | hide | past | favorite | 798 comments



People always underestimate the cost of properly learning a language. At any given time I tend to have a "main go-to language". I typically spend 2-4 years getting to the point where I can say I "know" a language. Then I try to stick to it long enough for the investment to pay off. Usually 8-10 years.

A surprising number of people think this is a very long time. It isn't. This is typically the time it takes to understand enough of the language, the compiler, the runtime, the standard library, and idiomatic ways to do things. It is the time it takes to get to the point where you can start to meaningfully contribute to evolving how the language is used and to meaningfully coach architects, programmers, and system designers. It is also what you need to absorb novices into the organization and train them fast.


Here is the thing: many software engineers don't need to learn a language "properly."

When you start a new job, there will almost always be an existing code base, and you'll have to make contributions to it. Pattern recognition will get you a long way before you need to dive deep into the language internals.


You are mixing two things. Learning a language and learning a codebase. Those are not the same thing. Besides, you are also just talking about what happens during the first N months. Most companies I've worked for allow for time to learn the language(s) being used. Some of them will fire you if you can't show sufficient progress over time.


This is true, and it makes the people who do actually know the language in depth extra valuable.


Not so much. I know quite a few languages very well, more than the average programmer, and more often than not a senior programmer who doesn't know the language as well as you overrides your recommendation, or some other red tape gets in the way. What has never happened is getting an increase in pay for knowing a language well.

So I don't share the sentiment that learning a language well is valuable. It's not, and with the newer AI tools coming out there will soon be no reason to learn any language in depth.


I think you have that exactly backwards. LLM-based tools do best on shallow-knowledge tasks. It’s depth where they struggle. Show me the syntax for constructing a lazy sequence in your language? LLM. Reason about its performance characteristics and interactions with other language features? Human, for the foreseeable future.
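For what it's worth, the "shallow" half of that example really is a one-liner in most languages. A minimal Python sketch (Python picked because it dominates this thread; `naturals` is my own name, not anything from the comment):

```python
from itertools import islice

def naturals():
    """Infinite lazy sequence of natural numbers (a generator)."""
    n = 0
    while True:
        yield n
        n += 1

# Generators are evaluated on demand: nothing runs until we pull values.
first_five = list(islice(naturals(), 5))
print(first_five)  # [0, 1, 2, 3, 4]
```

The "deep" half is exactly what the snippet doesn't show: when this allocates, how it interacts with the GIL or with async code, and when laziness bites you.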


Only for situations where that knowledge is required.


> It is the time it takes to get to the point where you can start to meaningfully contribute to evolving how the language is used and to meaningfully coach architects, programmers, and system designers. It is also what you need to absorb novices into the organization and train them fast.

I think these criteria in particular are much more than a lot of people mean when they say "learn a language", which would explain why their estimates are lower than yours. You're talking about expertise, whereas plenty of people are speaking of basic competency. This seems more like (natural) language semantics about what it means to "know" and the definition of "properly learn" than about the accuracy of people's estimates.


2012: Python is Awesome!

2014: Python is a great language, but there are a few pitfalls

2016: Python is a good language with the right IDE, tooling, and process. The people are pretty cool though.

2018: I like python, but I wish more people used type annotations.

2020: You know, metaclasses are freaking awesome! They saved me so much work!

2022: Why can't people code the most obvious solution in python?

2024: Celery! Jesus H. Christ! What were you thinking?!
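The 2020 entry above, metaclasses saving work, usually means patterns like subclass auto-registration. A minimal hedged sketch (all names here are mine, purely illustrative):

```python
class PluginMeta(type):
    """Metaclass that records every concrete subclass in a registry."""
    registry = {}

    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        if bases:  # skip the abstract base class itself
            PluginMeta.registry[name] = cls

class Plugin(metaclass=PluginMeta):
    pass

class CsvExporter(Plugin):
    pass

class JsonExporter(Plugin):
    pass

# Every subclass registered itself just by being defined.
print(sorted(PluginMeta.registry))  # ['CsvExporter', 'JsonExporter']
```

(Since Python 3.6, `__init_subclass__` covers this particular case without a metaclass, which is part of why opinions on metaclasses swing so much.)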


2012: Python is Awesome!

2014: Python is a pain in the butt to manage packages and dependencies, how the hell am I gonna deploy this?

It's still a mess 10 years later unless you live deep in the ecosystem and know which third-party solutions du jour to use to manage that complexity. At least we have Docker now.

Also let's not forget the Python 3 migration fiasco that lasted ~2008-2018 and I still find myself porting libraries to this day.

I haven't used Python in a personal project in 10 years because of these painful paper cuts.


At least for Python dependency management, I'd hardly call it a mess these days, and yes, it was horrible 10 years ago. The de-facto standard of venv and pip (using a requirements.txt file) is generally painless these days, to the extent that moving between macOS, Linux, and Windows is feasible for most (of my) Python work.
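For readers who haven't seen it, the workflow being described is small. The stdlib even exposes the venv step programmatically via `venv.create`; a hedged sketch using a throwaway temp directory:

```python
import tempfile
import venv
from pathlib import Path

# Programmatic equivalent of `python -m venv .venv`.
# with_pip=False keeps this sketch fast and offline-friendly;
# in real use you'd want with_pip=True so `pip install -r requirements.txt` works.
target = Path(tempfile.mkdtemp()) / ".venv"
venv.create(target, with_pip=False)

# The marker file every venv carries:
print((target / "pyvenv.cfg").exists())  # True
```

In practice, of course, this is just `python -m venv .venv` followed by `pip install -r requirements.txt`.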

And the Python 2/3 pain stopped being an issue a few years back. I haven't been forced to use Python 2 because of a library requirement in years.

Your criticisms WERE valid 5 or 10 years ago. In my experience, the ecosystem has improved greatly since then. I wouldn't use Python for large and shared developer code base but for one-man experiments and explorations, Python is pretty compelling these days in my opinion.


> The de-facto standard of venv and pip (using a requirements.txt file) is generally painless these days.

Except Python themselves now recommend the third-party pipenv. [0] Which means there are multiple authoritative voices saying different things, and that produces confusion.

[0] https://packaging.python.org/en/latest/tutorials/managing-de...


Now being from 2017 ;) I suspect no-one has touched that file properly since then because of old scars :(

Part of the issue is there's not a lot of differentiation between most of the "project management" tools (apart from their age, which in some cases explains certain design choices), and the coupling of most of these to their PEP 517 backend doesn't help. There are backends where there is differentiation, such as scikit-build, meson-python/mesonpy and enscons for wiring into pre-existing general-purpose build systems (cmake, meson and scons respectively), maturin for Rust, and flit-core as a bootstrapper, but given setuptools can be used without a setup.py now, I'm not clear on what some of the others provide.


That's why I deliberately described the pip/venv solution as a "de-facto" standard.

Is Python perfect in this regard? Absolutely not but the situation is FAR less painful than it was 10 years ago. The original comment claimed nothing has changed in 10 years. I used Python 10 years ago and disliked it intensely but I've recently (in the last 3 years) used it again and it is simply not true that nothing has improved in 10 years.


Poetry is a reasonable solution and has been for a few years now. We use it in production for deployments, and it's worked well for us.


It still won't beat the deployment speed of

   scp executable user@host:direc/tory/
that you get with Go.

I still use python for stuff that never leaves my computer, but in most cases if I know I need to run it on Someone Else's Machine (or even a server of mine) I'll reach for Go instead.


At the expense of absolutely massive, gargantuan, entire-OS-sized binaries.

Just looking at my bin/ folder, e.g.

    k9salpha - a relatively simple curses app - 56MB
    argocd - a cli for managing argocd - 155MB


It's like what, $0.01 per GB these days? I bought a 2 TB NVMe for Linux 4 years ago, on which I also game, and it's 50% full.

Disk space hasn't been a serious concern in two decades.


Until you start paying for it on cloud deployments per GB transfer.


        strip --strip-all k9salpha argocd


I agree with you, though if you live in a Linux+podman world, you can literally scp whole containers, so at least we made some progress.


Sure but that's not going to work across platforms, right? So not really an apples to apples comparison.


Even the most ardent of Go-haters have to be fair and admit it makes cross-compilation really nice.

    GOOS=linux GOARCH=amd64 go build && scp executable user@host:direc/tory/
Out of the box, no toolchains to manage.


This is pretty much what I do.

I have a Taskfile for local compilation and running (macOS) and when I need to release I can just say `task publish` and it'll cross-compile a Linux binary and copy it to my server.


Sure, it's nice in that regard.

But honestly running poetry install for python is not bad either.


Except in 3 years when poetry is bad for some reason and now there's a new better(?) tool again :D


Like the sibling comment said, cross-compiling is literally setting two environment variables.


Eh, poetry, pipenv, conda, setup.py, GCC and all dev tools to possibly compile some C shite you didn't know you needed? It's a complete shit show still...


+1


I started using Dagster a few days ago, which runs on Python.

As far as I can tell, I have to install the packages with pip. Wait - don't forget venv first. Oh but I'm not supposed to commit the venv, so how is anybody to know what I did? Okay, I have to set up a requirements.txt... but now what did I install in the venv? Okay, there's a way to dump the venv... but it's giving me all the dependencies, not the root package I installed?!

Okay, there's something called pipenv that seems smart... Oh wait, no, this Dagster has a pyproject.toml, maybe I should use poetry? Wait, poetry doesn't work, it seems like I'm supposed to use setup.py... so what's the point of pyproject.toml?

Okay I put the package into the setup.py. Great now I have to figure out the correct version dependencies by hand?! I don't get it, python is a nightmare.

Compare to .NET:

    dotnet add package

On build:

    dotnet run   # automatically restores packages to a project-level scope


Setting up a requirements.txt file and venv is _not_ a difficult ask. Seems like a skills issue here frankly.


Your attitude towards a new developer in the community is disappointing.

I just wrote in detail why I found it difficult/unintuitive and you dismissed everything without addressing it.


If that's too difficult then you might as well not be in the profession, because it's one of the simplest things in the world.


I've always wanted to like Python, but I got out of undergrad in 2006 and pretty much the entire first part of my career was the ridiculous Python 3 migration nonsense. Every time I wanted or needed to use Python there was some dependency management nightmare to deal with. For like a decade or so.

I think I'm like you, I have some sort of Python ptsd from those years and don't really care to venture into it again. It sounds like it's gotten a lot better recently, but also after years of using Go and TypeScript I can't imagine working on a codebase without static types.


We use poetry for dependency management in production, and it's fine.


Hah, even at Google, the Python 3 migration lasted well into 2020.


I remember a large bank building a major new project in 2017 with python 2.7.


The only answer to is to use rye and I mean it. pyenv, pip, venv, pipenv, poetry, it's all crap.

(yes rye uses venv, etc under the hood, I know)


> At least we have Docker now

I'm not so sure this is a desirable state to be in long-term.

While Docker (and other container solutions) does provide solutions to some of the challenges around environment consistency, build artefact management, development environments with linked services, etc., it's also one of numerous de-facto tools that in some ways ends up putting more effort on the end-user to adopt. Less effort than installing all dependencies yourself? Absolutely. More effort (and for no real reward) than a single binary built natively for the target platform? Yep.

This feels like a sort of 'shift right' approach to releasing an application in terms of developer effort. As a developer you can just bundle everything into a container image and now the end user needs Docker and the install is going to leave a mess of image layers on their system. You don't need all the dependencies installed system-wide, that's true, but why can't they just be bundled with the application?

It's also not a great cross-platform tool, as Docker on anything except Linux just hides the requisite virtualisation. It's mostly transparent, true, but is still plagued by platform-specific issues such as write performance on mounted volumes on macOS being ~10x as slow (there's a fairly old thread on the Docker forums that's still open with this as a reported issue, and it doesn't look like it's going away).

Additionally, while disk storage _is_ (comparably) cheap nowadays, the buildup of image layers that contain endless copies of binary dependencies doesn't feel great. I don't want or need hundreds of copies of libc et al. all with slightly different versions sitting around. Even the 'slim' Debian images are min. 120MB, let alone the 1GB full ones. As Python is on topic for this thread the Alpine images are smaller, but then Alpine image builds for Python don't work with the standard PyPI wheels (I'm not actually sure if this is still true, need to check).

As languages like Python are just source releases, some tooling is definitely needed to make that more palatable in terms of handling releases of Python tools (just big collections of scripts), but being able to build and copy a single binary for a Go application feels great having spent a lot of time practically requiring Docker for release Python tools.

And all of this is really just getting us back to where we were years ago, where it was actually feasible to release a native binary for a target platform. The difference is that Go's tooling around related areas such as dependency management and cross-platform compilation is more standardised, and as a result the developer experience for this is much nicer than with other comparable languages that also compile to native code.

Arguably a lot of the development decisions around how to manage and release tools comes down to how good the experience is for developers. Docker ends up being favourable for Python because there's no 'standard' Python way to bundle applications with their dependencies and Docker basically solves this problem, even if in a messy way.

Does Docker have its place? Definitely. Do I hope it sticks around in favour of improved cross-platform developer tooling? Absolutely not.


chore(omfg): fix print to print()


I have a feeling that you've never dealt with sequences of bytes (encoded strings) in Python 2.


Fix print to logger.info() why not?


Dependencies are a sign of weakness.


As opposed to writing everything from scratch? Python has a huge standard library. Making it bigger to avoid dependencies would add more bloat to the distribution.


and automatic internet-supplied dependencies are a sign of supplychain vulnerability


Ah yes, all applications should be a single very large source file that issues OS syscalls directly.


I have a nice way of testing a language - I download a couple of projects written by beginner / mid-level developers and not having commercial dependencies and see how much effort it takes to get them running.

Python is one of the worst performers (and Java is shockingly bad too, although a lot of that is down to the way the JVM/language have been mismanaged). At least unless you're comparing it to very low-level languages (VHDL, C).


> and Java is shockingly bad too, although a lot of that is down to the way the JVM/language have been mismanaged

This is interesting and not what I would expect. Generally to get a java app running from github you'd install the correct JDK and that should be about it. Many projects will use maven, which will know how to obtain dependencies. Could you describe a typical issue?


Dependencies requiring you edit some random XML file somewhere on your machine to get them working.

That and the whole JVM fragmentation and not being able to touch Oracle because it's Oracle and they'll use it as an excuse to sue you (at least that's the impression I had, the entire lawsuits thing is why I left Java back in the day).

You can say the same about other package managers, but it's not very often that I have to add another source to apt unless I'm running in a locked-down environment.


> Dependencies requiring you edit some random XML file somewhere on your machine to get them working.

No it doesn't. That's only if you're dealing with private maven repos and authentication etc. You don't need to do that at all for the use case in this thread, running a project from github.


Like the C and C++ compiler fragmentation, each with their own flavour of ISO support, UB and language extensions?

Or the Go compiler fragmentation, where only the reference compiler supports everything, tinygo supports a subset, and gccgo was left to die in the pre-generics Go era.


Editing your dependencies in maven is not some "random XML file somewhere", come on. It's a standard file that is situated like any build declaration.


Does Maven come bundled with the SDK?


No but you can just use ./mvnw instead of installing it. I have never had to install gradle as a package on my computer in my entire lifetime. I always used ./gradlew.


It's an additional install, like the jdk. Typically on a brand new os install the equivalent of these steps would be done:

    sudo apt-get install <jdk>  
    sudo apt-get install maven
And then you'll be good to go.


I develop on a Windows machine so you're left working out which version of which JDK you need to install.

That isn't really a problem with other languages.


My usual test for choosing a new programming language, is to see how easy or difficult it is to read documentation without having to open a web browser.

For languages/tools/frameworks/whatever that I'm already using, the main way I compare between them is looking at the ratio of "problems I was able to solve from their official documentation" vs "problems where I needed to use random blog posts (or worse, Stack Overflow) to find the solution".

Makes it a lot easier for me to choose tech for my hobby projects, where I sometimes don't even touch projects for several years because stuff just runs smoothly.


In which way do you believe the JVM/language has been mismanaged?


Oracle requiring commercial licensing for their JVM has made it so radioactive that my workplace firewalls "oracle.com" to prevent anyone from accidentally using it.


Does anybody actually use OracleJDK? OpenJDK has been the de-facto standard for years.


What's so bad about Celery?


The docs are surprisingly unspecific for such a mature project, enough that you often need to read source code to confirm your suspicions one way or another. The style has been redone a few different times now, and most of the old stuff is there for backwards compatibility so it's unclear what The Correct Way is to use it (once you step outside the obvious classes and methods).

The source code is like an Escher drawing, I'm sure it makes sense to the maintainer but its logic is incredibly abstract and scattered. From my notes on it ~1 year ago: "I took a dive into Celery source to better understand their native way for inspecting workers... holy shit. Celery is an overgrown jungle of abstractions."

It's honestly kind of surprising to me that Celery is still the undisputed #1 message queue abstraction library. I guess it's a simple enough problem that companies using Python at scale roll their own. I wonder what Instagram uses instead of Celery.

I found this [1] to be a useful comparison between queues in Python, if anybody has found a similarly-terse and exhaustive one that's more up to date for 2024 I'd love to see it.

[1]: https://dramatiq.io/motivation.html


dramatiq seems like a true gem. I didn't know it existed. I don't see why this can't just be an immediate Celery replacement for your typical Django project that already uses Celery - with a bit of time spent on refactoring into dramatiq.


It's powerful, but good luck reading the source. It's a bit of a tangled, over-engineered mess at this point, and the reason there are a number of "newer" libraries trying to replace it. I can't recall specifics since it has been maybe a decade since I last used it, but it works until it doesn't, and then it's incredibly hard to debug.

It's been a decade and maybe it's gotten better, but that sentiment is why a number of people don't want to use Celery. I would also add that it was created in a different mindset of delayed jobs, and I believe there are better patterns these days without as much complexity.


curious as well!


Go look at the source code. Everything is abstracted to the nth degree.

If you have a bug, good luck figuring it out!


hah, my journey was -- man, I'm 100s of lines in, what is the type of this thing that I am passing? ... man, I really wish I had types or at least hints

and then we got type hints, but by then I think more of my work will be moving to Go anyway

why is packaging still broken?


I only dabbled with Python occasionally in college, and the lack of types was my biggest obstacle. Even reading the official documentation of libraries, I still couldn't figure out what I should expect from whatever function I was calling.

I recall the Selenium library I was using as a particularly gnarly offender -- pretty much every function I called, I had to debug or print(type(x)) to figure out what I'd gotten back.
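Type annotations target exactly this guessing game: the signature itself tells you what comes back, without running the code. A hedged sketch with a made-up function (this is not the real Selenium API):

```python
from typing import get_type_hints

def find_elements(selector: str, timeout: float = 5.0) -> list[str]:
    """Hypothetical lookup function: the annotations alone tell callers
    (and IDEs, and type checkers) what goes in and what comes out."""
    return [selector] if timeout > 0 else []

# The hints are introspectable at runtime, too:
hints = get_type_hints(find_elements)
print(hints["return"])  # list[str]
```

With annotated libraries, the debug-and-print loop mostly disappears; the editor shows you the return type as you write the call.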


yeah ... i've become far more disenchanted with loosely typed languages. It's amazing to be able to quickly script things together and focus on the happy path in one-off scripts and bash-replacements but ... in a huge system when reasoning about what's going on -- I find it far more rewarding to know exactly what the type of a thing is and what it can do, etc., etc., and here with Python I rely on the editor a ton and I don't really like that.


And then you have the ever-changing ecosystem, which can take years to sort out on its own and must be constantly studied.

E.g., if you arrive in Python right now: numpy is at version 2 and polars is stable, uv is all the rage, pydantic gained so much perf it's not even funny, toml parsing is part of the stdlib, and textual is looking very good. Type hints are much better than 2 years ago, htmx is used a lot in the web department, fasthtml is having a moment in the light, and pyscript got a platform. Is any of that information worth acting on? Should you ignore it all for the sake of productivity? Should you carefully evaluate each proposition?

Now if you have to do that with rust and erlang as well, it's a lot of time you could spend on coding instead.


Instead of constantly trying to keep up with everything in an ecosystem (which is simply impossible), just keep up with the stuff you are using to get your product working. Keep a side eye on the rest of the ecosystem to see if something neat is developing that you might use, but no need to study every hyped package coming out. In the end they are all tools to get something that works.

We still use flask and it's been great all the way, and it still holds up really well, there's no reason to move to any of the other packages just because that's the next new shiny thing.


There is a middle ground.

Not having ruff, uv, django-ninja or diskcache would be a plain waste of productivity and ease of life for me.


I would think the best option is to always be suspicious of hype unless you understand why something is being hyped. I'd also argue it's worth understanding where your stack falls down, so you know when you need to look for alternative stacks.

The other bit is it's worth understanding how your stack interacts with related stacks (to use your examples, do uv and pydantic using Rust vs. being pure Python cause issues?), but your OS is also changing; are your tools still going to work?


First, you don't have to follow hype, but listening to it is how you know where your ecosystem is heading.

Second, there is way more to an ecosystem that hype. Api, compat, deprecation, new features...

If you have never created a web API and have to do it now, you still have to make a choice, hype or not. This choice will have very different consequences if you only know about flask and DRF, if you've heard of fastapi, or if you get told about django-ninja.

It's the same for every single task.

And even without new things, just learning the old ones is work. In Python you can encounter -O or PYTHONBREAKPOINT, be put on a project with one of 10 packaging systems, have to deal with .pth files, etc.

And it's the same for all languages. It's a lot of work.


Sure, languages have depth, and there is at some level some need to keep up with the Joneses (though to mix metaphors, different ecosystems have their red queens running at very different paces, and is it wise to be in or try to join the fastest paced race?), but I feel there is a distinction between following the latest hype blindly (and using the latest tool because of hype), and evaluating tools based on some criteria (and choosing the latest tool because it actually solves the problem on hand best). The latter is a skill that can be gained and taught in one ecosystem, and applied in another (and the former causes issues far too often).

I think this skill is very similar to what highly productive academics have, they evaluate and examine ideas, techniques and concepts from a wide variety of fields, and combine them to build new ones.


Python has _remarkably little_ ecosystem churn compared to a lot of languages. New packages come and go all the time but the older packages are very well maintained and nobody's forcing you to switch to the new ones.


Depends on your bubble. AI circles have huge churn.

And I'm an advocate of not chasing the new shiny toy, but it's also a problem when, after so many years, people don't know -m exists or stick to setup.py.

In fact, I'm willing to bet half of HN can code in Python, yet can't tell the difference between setup.py, setup.cfg and pyproject.toml.

That's part of the things the ecosystem dumps on you.


I'm sorry to say that if you can't google the difference between setup.py and pyproject.toml then it's a skills issue, because in 14 years of using Python that has never been an impediment to getting anything done in a timely manner.


IMHO, that's not part of learning the language. You don't really even need half of the packages you mentioned unless you don't know the language and, instead, rely upon third-party crutches.


Depends on what your goal is. If your primary purpose for learning python is to be able to use numpy and pandas, then you should probably start learning them from day 1. Learning how to inefficiently implement parts of pandas in pure python on an ad-hoc basis is a waste of time for most people compared to just spending that time learning how to efficiently use pandas.
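To make that point concrete, here is roughly what a single pandas groupby-mean call replaces when hand-rolled. A hedged pure-Python sketch with made-up data:

```python
from collections import defaultdict

rows = [("a", 1.0), ("b", 2.0), ("a", 3.0)]

# Ad-hoc pure-Python version of what would be a one-liner in pandas,
# something like df.groupby("key")["val"].mean():
sums = defaultdict(lambda: [0.0, 0])
for key, val in rows:
    sums[key][0] += val
    sums[key][1] += 1
means = {k: total / count for k, (total, count) in sums.items()}

print(means)  # {'a': 2.0, 'b': 2.0}
```

It works, but every such helper is code you now own, test, and optimize yourself, which is the argument for learning the library rather than reimplementing it.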


Agreed. Having a passing knowledge of many languages is of course easy (even being moderately productive with them, esp. with Copilot now), but knowing the ins and outs of a language and its ecosystem is a very nontrivial time investment.

Thus, I'm mostly an anti-fan of the idea of polyglot environments.


Naturally one can't know all the languages of the world.

Even so, I have done consulting since 1999, with a short 2-year pause in research, and have several languages that I keep switching between, every couple of months when project reassignment takes place.

The usual T-shape kind of consultant, know some stuff really well, and others good enough to deliver.


It’s not just the time to understand those nuances, but to keep up with the pace of changes. There’s an aspect of navigating those in some languages (JS…) that can be time consuming depending on where it’s deployed.


I think JS is a degenerate case. The reason I stay as far away from JS/TS as possible is that everything seems sloppy and therefore people try to "fix" things by replacing them every N months.

Indeed, when you see so many commenters advocating not bothering to learn a language deeply, you kind of get a taste for how prevalent the sloppiness mindset is. JS seems to attract a very high proportion of unskilled programmers.

I would struggle to name a single project I have been involved in where the JS/TS based front-end hasn't been reimplemented multiple times, from what I can tell, because people get lost.


Not only is it the nuances and pace of changes, but there's also a temporal element of seeing cause and effect over time from within an ecosystem. Is there a way to just parachute in and learn how to judiciously use classes in JavaScript the way one would if they were out there assigning Thing.prototype.method = function () {}? Can Udemy give you the same a-ha moment learning Promises today that someone got when they were drowning in callback pyramids in their Express v3 app?


I totally agree with you. Only after a minimum of 2 years of fully coding a viable production solution can one learn most of the pitfalls of a given language. I would add frameworks, which bring another level of complexity; sometimes a framework is like a separate language.


I believe people over-estimate the benefit of properly learning a language.

3-12 months seems to be sufficient for the specialisation benefits to outweigh the small hit to language competency.

My strategy is to be a generalist superficially learning specialized languages.


Whatever time frames you apply, the only way to really learn a language is to use it. Reading a book, blog articles, and watching videos on YouTube won't give you the hands on experience necessary to internalize how everything works.


Of course. My methodology is usually this:

Day 1: read the introduction of a book, set up editor and environment

Day 2: (superficially) learn flow control and type system

Day 3: practice with coding problems on codewars

Day 4-7: reimplement one of my projects

Week 2: read the second quarter of the book and continue with puzzles

Week 3+: Just projects, maybe invest a day into tooling somewhere in between

That is assuming 1-3 hour days


> read the second quarter of the book

And then never get to the third and fourth quarters because the practise you get from actually typing stuff is far more valuable? :)


Exactly. There usually is some stuff about the deeper, darker corners of a language in the type of book I pick.

I usually pick the authoritative book / advanced book. Think "Programming PHP", "The joy of clojure", "The C Programming Language", etc. I don't necessarily need to know all about performance optimization, the async model or non-html templating.

Reading a technical book cover to cover feels wasteful imo


True, but actually reading a book about the language to understand the philosophy behind it and its trade offs speeds up the process enormously.


The way to become good at something is to do it for a long time and to be willing to think about what you are doing and continually improve.

There's much more to learning a language properly than just learning the language spec itself. Especially if we're talking about languages such as Java, where you have to know quite a bit about how the runtime works in order to be able to make good design decisions. And Go isn't really that simple either when we start to get into the weeds of performant concurrent systems that also need to be robust. It requires quite a bit of work to be able to make full use of what Go offers. It may look simple, but concurrency never is.

You're essentially arguing in favor of not being good at what you do, or at best being mediocre at lots of things. And I'm sure there are lots of jobs where that is okay. The thing is: why would you want to spend a lifetime being mediocre, to work on mediocre projects with other people who are mediocre?


One of the most important qualities of Go is that it is actually possible to fully understand the language -- in the sense that you never see a snippet of Go code and think "wtf, you can do that?" or "hang on, why does that work?"

Getting to that point takes many years, to be sure. But the language is simple enough, and changes slowly enough, that it is not an unrealistic goal -- the way it would be in C++, or Rust, or just about any other mainstream language.


Why is that important?

Let's say one language has me find a weird rabbit hole every 50 hours of use, and I spend half an hour learning about it.

Let's say another language has no rabbit holes, but I'm 5% slower at coding in it.

Why would I not prefer the first language?

(And 5% is supposed to be an intentional lowball. I'm confident I can find language pairs where my productivity differs by significantly more.)


That's an odd assumption. That you either have to choose between rabbit holes or development speed. Usually languages with lots of rabbit holes tend to require a lot more work.

Take C++ vs Go for instance. C++ is an endless series of deep, complex rabbit holes where you get enough rope to hang not only yourself but your entire team. Serious C++ projects tend to require a style guide for either the company (most companies don't have that kind of discipline) or at least some consensus on what subset of the language to use and how to structure problems for a particular project. Look at the style guides used by Google or the Joint Strike Fighter project. They're massive. C++ is not a particularly fast language to develop quality code in unless you have a small team and/or you are supremely well aligned and disciplined. And even when you work on a disciplined team, it takes time.

Go has fewer rabbit holes. It gives you an easier path to concurrency. It has managed memory. It is sufficiently performant for a very wide range of projects. It has a very capable and practically oriented standard library that even includes HTTP, cryptography, support for some common formats and a decent networking abstraction that actually allows you design flexibility. And importantly: it comes with a shared code style which is tooling enforced, so you don't have to waste time on mediating between various opinionated hothead programmers on your project.


Because not all code is yours. In a team, the time spent on “rabbit holes” adds up, increasing the risk of bugs. A "slower" but predictable language can lead to more consistent, maintainable code, which is often more valuable in the long run or last but not least, running in production.


> In a team, the time spent on “rabbit holes” adds up, increasing the risk of bugs.

It adds up with the number of people on the team? Or the number of people on the team squared? Cubed? nlogn? Because a lot of those options would still favor the former language.

And if it's happening particularly often, that means the rate will fall off drastically as mastery is achieved.

I see a risk when code does something different from expectations. I don't see any risk when code has some kind of novel syntax that requires looking it up. Or when you learn about a feature from the documentation or a blog post.

Being predictable is quite valuable, but predictability is different from memorizing every feature.


This is similar to the experience that I had with Erlang. After having spent time with it, I was hardly ever surprised looking at any code, and my brain could deal with the actual problems at hand without having to figure out how to apply what I knew about the language.


Personally my biggest issue with reading go code has come from the //go: magic comments that I never seem to fully understand.

For example, what the heck is a //go:cgo_import_dynamic? As far as I can tell this is only documented on GitHub issues and mailing list comments.



Hmm, where's the rendered html doc for this?

I don't see it on https://pkg.go.dev/cmd/cgo


I've been doing Go for a long time now and I still occasionally come across snippets of code that I don't understand, or whose purpose I don't understand. So I think "never" is too strong a word. I'd say "rarely" rather than "never".


> in the sense that you never see a snippet of Go code and think "wtf, you can do that?" or "hang on, why does that work?"

I am extremely doubtful that this is true and would like evidence.


kind of hard for someone to prove a negative - feel free to find some Go code that meets that definition.


I feel this way... very often.

* Init functions

* Top-level variables being shared between all files in a package

* For-loop sharing (fixed in 1.22 [0])

* ldflags (this is more of build behavior, but it took me a while to figure out how some variables were being set [1]. Note: Go does embed some data by default, but an app I was working on introduced more metadata)

* Go build directives prevent IDEs/linters from analyzing a file at all. E.g. if you edit a linux-only file on macOS, you get _zero_ help.

I dunno, there are a lot of other weird, confusing things that Go does. It is less than most other languages, though.

[0] https://go.dev/wiki/LoopvarExperiment

[1] https://www.digitalocean.com/community/tutorials/using-ldfla...


Needing to cast nil to a type was weird. C/C++/Java/Python don't do that.

They also take on the weird Java-ish culture that everything is better if it's written in Go.

They also had this weird take on the C++ vtable, so they eschewed anything class-like to keep everything statically linked for speed.

It also took them a long time to figure out their loop variable syntax was broken. So ... hrm.


C++ and Java do make you cast null/nullptr, if you need to resolve an ambiguous overload based on pointer type.


First three are clearly documented in language reference, which could be read in several hours (at least before generics were introduced).


You're misunderstanding what I'm saying. How the language works is relatively simple. Understanding what's going on when problems arise isn't.

* "how is my Cobra command being registered when I'm not calling any code?" -> "oh, there's an init function that registers the command when imported"

* "why is this variable not working?" -> "oh, it's that for loop thing again"

* "why does the result of my unit tests depend on the order of test execution?" -> "oh, we have 150 Cobra commands in one package with variables shared between _all_ of those commands"

* "how is this variable being set?" -> "oh, we have to call go build with a monstrous ldflags argument"

* "why is CI not passing for this oneliner when this looked fine locally?" -> "oh, it's a linux-only file so the IDE isn't even showing me simple syntax errors"


I like to read language references (not just Go's; for example, I have also fully read the ECMAScript standard as part of my research in the past) and yet I will never say I can remember everything from those references. "Documented" isn't the same as "easy to remember or recall".


I looked around for something like CppQuiz or its Rust analogue, where you're presented puzzling "Why would anybody do that?" fragments and asked what happens -- something that's often extremely difficult. If I had found one of those for Golang I could maybe see whether there are people who just ace the quiz first time.

But what I found were the sort of softball "Is Go Awesome, Brilliant, Cool or all of the above?" type questions, or maybe some easy syntax that you'd see in day 1. So I have no evidence whether in fact many, some or even a few Go programmers have this deep comprehensive understanding.


It's known for being a small and simple language.


Just like Brainfuck.


This is actually how I feel about Ruby. Not full on Rails because there is a lot more to learn there which comes with the time investment (which is also worth it in many cases).

But for most scripting work especially, just raw Ruby and maybe the Sequel library for a database just makes my productivity soar.


Do you have any open source examples (either your own or others) of simple Ruby + Sequel that are particularly elegant + productive that you could point to? Would love to see this.

I agree Ruby is the most elegant and enjoyable language to develop in.


Agreed, there's such a wide knowledge gap between "I read about this language design choice online" and "I know this from first principles". Truly, thoroughly understanding the languages and tools you work with every day unlocks a deeper level of understanding that you can't substitute with anything else. It's what people often describe when they first learn Lisp, but the same actually holds for every language, tool, etc. There is tremendous value in depth. However, since there is considerable opportunity cost in choosing what you put your time and effort into, do be mindful of the tools you decide to acquire this amount of knowledge in.


It takes forever to "properly learn" a language in my opinion, because it would mean total immersion into the ecosystem and the most prominent libraries built around that language as well, and they will always be moving targets even though the language itself might be static. Go is no exception, and in fact there are only a handful of languages that can ever be "properly learned" if we are strict about that. The trick is not to properly learn, but to know how to learn enough of anything you need and make the best use out of that.


I completely disagree. I have been successful in building a new project for my company with myself and the entire team being completely new to Go when we started. Did we have growing pains, and some sub-par decisions? Absolutely. Do we write better Go code today, 4 years later? Absolutely. Did this mean that the project was in any way significantly slowed down, after the first 1-2 months, by our use of Go compared to the C# or Java we knew before? No, not really. The ways in which Go fit better for our chosen domain completely made up for that (and this all despite my continued belief that Go as a language is inferior to Java and C#).

Now, there are other languages where I'm sure that we would have had significant issues if we tried to adopt them. In particular, switching from the GC languages we were used to over to a manually memory-managed language like C++ or Rust or C would surely have been a much bigger problem. The same would have probably been true if we had switched to a dynamic language.

But generally, deep language proficiency has less value in my experience than general programming proficiency, and much less than relevant domain knowledge. I would hire a complete Go novice that had programmed in Python in our specific domain over someone who has worked in Go for a decade but has only passing domain knowledge any day.


I also think it is a good idea to learn a few very different languages when you are starting out so that you have a sense of the tradeoffs and the different ways things can be done. When I was in college they taught C and Lisp (I am dating myself) which seemed very different to me at the time but also useful in different ways. Close to the hardware, vs supporting objects, lambda functions, etc. Later I learned a number of other languages and now I try hard to avoid switching languages because of the lost productivity.

Now if I were starting out I think I would try to focus on two different languages, but more modern ones. Maybe Rust and Python?


I don't think it is that long. Think of learning any spoken language. We expect that it takes AT LEAST ten years to be able to master it, and typically it takes much longer to meaningfully add to the compendium of literature. Usually we are talking about decades. It isn't just about the tool (speaking, writing) but being so fluent that you can successfully express yourself in a way that has impact.

If you can do it in ten years with a software stack, I'd say that's pretty impressive.


ChatGPT accelerates this timeline significantly. As a senior programmer you can largely skip the "junior" phase of the learning, almost like pair programming with a very proficient junior programmer.


I disagree. ChatGPT is like pair programming with someone who is on drugs and tends to lie a lot. Co-Pilot is even worse as it constantly bombards you with poor suggestions and interrupts your flow when you should be left alone to think about the problem you are solving.

What separates a senior developer from a junior is that the language is no longer a barrier so your spend most of your time thinking about the problem you are solving and how you balance requirements, take into account human elements, and try to think ahead without prematurely spending time on things you aren't going to need.

Being shown a bunch of possible solutions to choose from doesn't help you develop the ability to think about problems and come up with better solutions.

Yes, ChatGPT can occasionally help you when you need a nudge to pick a direction or you come across something you don't know anything about. But it is not a substitute for knowing stuff and it can't actually teach you how to think.


I am fully convinced that programming language generation out of stuff like ChatGPT is an intermediate step on their evolution, just like early compiler adopters wouldn't trust compilers that wouldn't show the Assembly output.

Eventually we will achieve another milestone in what it means to program computers, where tools like ChatGPT will directly produce executables, or perform actions, and only when asked it will produce something akin to source code for cross-checking.


That's true. I agree. Python is the only language I ever really learned to a deep level and properly, and it makes a huge difference! (And I'm still learning of course.)


Using esoteric lang constructs is bad idea IMO. Makes software harder to read/port/maintain. It’s useful for some hack & high perf stuff but not typical apps.


Agreed. But knowing a bit about how a language (and runtime) actually works can make a huge difference when you design stuff.

For instance, I've seen lots of examples of people implementing LRU caches in Java, and then scratching their heads when performance drops because they have no idea how the GC works. Or complicating things immensely because they realize memory is tricky and then ending up with some complex pooling scheme that constantly breaks, rather than realizing that more naive code exploiting the low cost of ultra-short-lived local objects is often the better choice.

If you know what you are doing it becomes easier to choose solutions that are performant and understandable.


By now I know three major languages: Go, Python, and Typescript. I know tradeoffs at-a-glance, I deeply understand the syntax and its various forms, the full array of tooling and what they do, and lastly (but maybe most importantly) I can estimate more accurately because I can architect in my head.

I can work in a myriad of other languages. I may be able to do some of the things above in Java or Rust but not nearly to the degree to which I can in languages I know. I think the difference is I'm probably not going to be leading a Java project or producing anything really innovative at a code level.

To me, more important than picking a hammer, is knowing a variety of hammers that are good at certain tasks. I don't focus on Rust or Java as much because, frankly, I can build most things that are pertinent to the constraints of my work environment with the languages I already know, and most people I encounter also know them. The other considerable factor I have is that most things I work on can be horizontally scaled, so my need for Rust is very niche. With respect to Java, I have a lot of workarounds that are cleanly abstracted enough before I need its dynamism and subsequent mental overhead.


IME, engineers and "the discourse" when we argue on the Internet often conflate "ease of use" and "ease of learning". When we say "ease of use", we usually mean "ease of learning."

If you're a hobbyist, yeah, you should heavily value "ease of learning." If you're a professional, the learning curve is worth it if the tool's every day leverage is very high once you're ramped up. Too many developers don't put those 3-4 months in, in part due to the over-emphasis on "ease of learning" in our discussions/evaluations of things.

I was a part of a very large go project (https://news.ycombinator.com/item?id=11282948) and go-based company infra generally some years ago, and go is emblematic of the classic tool that is amazing at ease of learning, and quite mediocre at "ease of use" as time goes on.

I personally end up resenting those tools because I feel tricked or condescended to. (This is a little silly, but emotions are silly.)

I'd wager this is also why Rust is a perennial "most loved" winner in the surveys: it gets better as your relationship with it deepens, and it keeps its promises. Developers highly value integrity over trickery, and hard-earned but deep value feels like integrity and wins in the long run. (other examples: VIM, *nix, git)


I used to work for a Go shop. We dealt with financial data. I found it so annoying that many of my colleagues would use Go for one-off tasks such as aggregating CSV files, updating the database with some data, or fetching data from the database, and then trying to make a plot. I saw my colleagues again and again implementing basic algorithms such as rolling median, or finding a maximum. Instead of loading data into Pandas and doing a group by, they would create some kind of loopy solution that would use maps.

I totally understand why some people prefer Go in production to Python, but I could never understand why people wouldn't just learn the standard data science tools instead of reinventing the wheel in Go, always debugging their own off-by-one errors. It was difficult for me to trust the results of such analyses, given that I knew how many of the basic functions were written on the fly and probably not even tested.

In the end, I didn't think it was a good use of the company's time. It felt more like an ego thing - thinking and showing that Go is sufficient. It reminds me of how people try to use iPads to code - only to show that they can do it.


I've mostly seen the opposite - pandas and Jupyter notebooks shipped directly to production because the data scientists and AI guys didn't know how to do anything but Python. As a result, the solutions were not performant and often had lots of runtime crashes due to Python's looser typing.


If they're shipping notebooks to production and having so many crashes, I'd question that they even know how to do Python.


When do you ship notebooks to production? Jupiter was never meant for external clients.


While terrifying, it is not uncommon to see python notebooks make it to production.


Oh god why. I thought (and hoped) that GP didn't actually mean this.

I see how a team or an organization can eventually get to this point. It just saddens me that they got there.


Why not? Many data-related tasks are rather ad hoc; it's a waste of time to make long-lasting software out of every ad hoc request.


Seen the same with exposed matlab web apps. Not just the Python guys, kind of shocking how much group think exists on any platform.


Quite a number of AI edtech sites use notebooks in production for assignments, as an example of when it happens.


Usually what happens is:

1. "We got it all working in the notebook"

2. "Great, ship it"

3. (data scientist takes the notebook code almost verbatim, wraps it in a basic CLI or HTTP API and it gets shipped off in a docker container for other services to consume)


In practice that’s not too crazy. It’s fast, easy to debug, and works with most CI/CD tools.

If you ever want to work in teams that kind of setup works extremely well.


I'm not sure how you're going to fix this given the data science tools are in Python. Are you gonna implement a half-broken 20% of numpy/scipy for a one-off program and then try to port?

These libraries hide a lot of complexity and implementing even a few operators is a project.


I don't think they are complaining about the libraries, but rather about the scratchpads and notebooks that people use for ideation and evaluation being moved directly into a production environment, because the authors don't have the experience or time to build more structured, efficient and maintainable code.


Rust + Polars comes to mind.


I've had exactly the same experience, it's a nice language but using it for things it's not suited for like data exploration makes no sense to me.

Production data pipelines on the other hand, but only after testing them well and as you say, making sure there's good testing if you're implementing things like numerical routines.


Do you have experience implementing ETL pipelines in Go? I think it'd be a better fit for us over our current language, but I'm curious to hear from people who've actually done it.


Yes. It works fairly well. With that said, I've got a feeling that things would be a lot easier to change around if we weren't using it; we end up writing a lot of code to do relatively simple things.


I do this at my job.. Disclaimer: I’m a web dev (“architect”) who does some lightweight data engineering tasks to facilitate views in some of my apps.

My pipelines are very simple (no DAG-like dependencies across pipelines). I could just have separate scripts, but instead I have a monorepo of pipelines that implement an interface with Extract, Transform, Load methods. I run this as a single process that runs pipelines on a schedule and has an HTTP API for manually triggering pipelines.

At some point I felt guilty that I am doing something nobody else seems to do, and that I had rolled my own poor-man’s orchestrator. I played around with Dagster and it was pretty nice but I decided it was overkill for my needs (however I definitely think the actual data analysis team at my company should switch from Jenkins to Dagster heh…)

On a separate note, all of my pipelines Load into Elasticsearch, which I’m using as a data warehouse. I’ve realized this is another unconventional decision I’ve made, but it also seems to work well for my use-cases.


What is current language and have considered doing it in SQL?

I don't think go will be the right choice. It is just not its strength.


It depends on what you're doing right? The commenter here replied to me, and we're processing really large data files that are deliberately not in a SQL database due to size, only artefacts of these files eventually make it into a time series DB. For us Go works well and is performant without any great difficulty. For domain specific analytics we generally use Python, and Go just calls out to an API to do them.


You are right. And I am mostly a "T" guy so I guess the answer was mostly about the transform.

For extracting the data, go is probably a very good choice. But for transforming, pretty often not, although your use case may be suitable.

In the end, the question was very open ended.


> it's a nice language but using it for things it's not suited for like data exploration

Pedantic point, but this is an issue of library support, not the language.

For whatever reason, data scientists and ML researchers decided to write their libraries and tools in a Kindergartner's language with meaningful whitespace, dynamic typing, and various other juvenile PL features specifically aimed at five year olds and the mentally infirm.


Nobody would really use a compiled language for this; the compile-run-edit cycle just takes too long. Prior to Python people really just used MATLAB and Mathematica for that sort of work on the physics/engineering side, and R and Stata/SPSS on the bio + maths side. MATLAB, Mathematica, Stata and SPSS are all commercial, and R has exactly the same problems as Python in environment management and compiled binaries; if you use it today you end up doing a lot of manual compilation of dependencies and putting them in the PATH, on Linux at least.

Python became popular because the key scientific ecosystem libraries copied the libraries from MATLAB closely which made it easy to pick up, and because it was free. Anaconda made a distribution that was easy to install with all the dependencies compiled for you, which worked on Linux/Mac/Windows which made it much easier to use than R. The other interactive languages around at the time were Ruby which was heavily web dev focused, and Perl. Node didn’t yet exist.

Once you have an ecosystem in a language it’s very hard to supplant. You need big resources to go against the grain. That no big company has decided to pour lots of money into alternatives even despite the problems probably tells us that it’s not viewed as being worth it.


> I used to work for a Go shop. We dealt with financial data.

People dealing with finance are THE MOST risk-averse people I know. No new tool will be used without years of vetting and it'll still be blamed if something goes wrong - even if the new tool never touched that bit of the process =)

Source: Consulted for a finance company and it was Ye Olde Java and COBOL all the way down =)


If they’re risk averse, they shouldn’t, as the OP claimed, implement basic algorithms such as rolling median, or finding a maximum instead of loading data into Pandas and doing a group by, would they?


Using Go for data analysis is not the risk averse choice. If the comment were "everyone was using excel and maybe R and I couldn't get them to use python", then the risk aversion comment would be spot on :)


>> People dealing with finance are THE MOST risk-averse people I know. No new tool will be used without years of vetting and it'll still be blamed if something goes wrong - even if the new tool never touched that bit of the process =)

What exactly do you mean by "finance" in your context? Even within a huge organization like Bank of America you can have an extremely conservative part and an innovative part. Obviously, most employees are likely working for the conservative part, but that doesn't mean that there aren't some people playing with modern technology. It just doesn't get evenly distributed, and you can be many layers from that usage in your department.


Depends what you mean with "dealing with finance". If you're talking about core back office 'plumbing' the yes. Other parts are far more adventurous. Poke around any huge financial institution and you'll probably find everything from COBOL to some experimental in-house variant of Haskell.


It sounds like every single data operation in this shop was done with a "new tool" written on the spot in Go.


People who are saying Finance == assembly/COBOL/Java either have never worked in the area or have decided about the elephant by touching a very small part of it.

Within a bank, you can have the accounting department using tested and tried approaches and/or vendor products whereas the machine learning group may be using the latest tech. But finance is not all banks - you can have non-banking financial orgs, rating firms, investment consultancies, private equity, hedge funds and on and on. A hedge fund probably uses super cutting edge stuff in one team while the other has been chugging along with stored procedures and Java.

It is too big and varied to generalize.


I don't think this is true of every financial institution. Nubank runs almost entirely on Clojure except for some of its Data Science tooling which is in Scala. Sure it's JVM, but it doesn't exactly scream risk-averse to me.


My experience is different to that.

I'd say people and businesses that best understand and balance risks against potential rewards are the most successful. I know a couple of skydiving accountants...

Use of innovative tech in finance is one way for companies to gain an "edge" which can make them a lot of money. I wouldn't characterise people working in those particular areas as more risk averse.

OTOH finance businesses (esp. those that manage other peoples money) are regulated and can't put new tech into Production without careful change control. But this evolves over time, and controls were looser in the past.

There's a steady creep of regulation and control into tech used in finance outside and around IT: so called EUC (End-User Computing). This can be a source of "wrong tool for the job" syndrome. I've seen some hellish SQL written by non-IT people, but it got the job done. It's also an area where source code control, testing, and release processes are inevitably more human and error-prone.

There is a culture clash internal to large finance between "move fast and break things!" and "nothing can change, ever - too risky!" This leads to a mixture of the very old and very new, and inevitably, more complexity. Achieving homogeneity once the heterogeneous is out of the bottle is very hard - attempts to do so often (always?) lead to a variation on the xkcd "Standards" situation https://xkcd.com/927/


I'm surprised they were adventurous enough to use Java.


Someone picked Java at the start of the millennium-ish and they never dared to move away. =)


> I used to work for a Go shop. We dealt with financial data.

Hopefully you never forgot to initialize a numeric field and had it default to 0!


I think you are implying that this is incorrect behaviour. Could you explain why?

The alternative would be to have it be undefined, which seems clearly worse. Or do you mean that it should be a compiler warning/error instead?


I've worked in Go for two years, and I hate zero values with a passion. I would much prefer undefined/nil/whatever as a default value. At least that obviously represents an invalid value, and will crash on use, become `null` in the database and when serialized to JSON. `0`, however, is indistinguishable from "missing data", and will just sit there and slowly poison your production data over time.


I don’t think you mean that. By undefined, I was referring to C-style undefined behaviour stemming from reading uninitialised memory (which is the only alternative to setting the memory to a known value when it is declared but not initialised). This is clearly worse than zero values, because at least zero values won't result in initialisation from memory that could contain anything.

If “null” is better for your use case, that’s not too difficult to emulate in Go (although perhaps not as ergonomic as it is in other languages). What you want is effectively an “optional” type rather than the type itself. The easiest way to represent this in Go is to use pointers to the type, which have the zero-value of nil. Combining this with JSON encoding ‘omitempty’ would get you the properties you were looking for.

I don’t think the lack of default nullability is bad (look at all the languages that are trying to slowly phase it out by supporting non-nullable types).


> I don’t think you mean that. By undefined, I was referring to C-style undefined behaviour stemming from reading uninitialised memory

You're right, I was referring to Javascript-style `undefined`, where it's basically a 2nd `null` value.

> The easiest way to represent this in Go is to use pointers to the type, which have the zero-value of nil.

This certainly works, but it has some serious drawbacks imo:

* Performance overhead

* The integer field is no longer copied along with the struct. The resulting aliasing shenanigans can easily trip up experienced developers, because "why would anyone store an `*int` in a struct instead of a plain `int`?"

* Other silly mistakes that no linters catch, such as comparing the pointers when you meant to compare the raw values.


> Or do you mean that it should be a compiler warning/error instead?

Yes - it's pretty trivial to enforce that all fields be initialized.


> rolling median, or finding a maximum

The real question is why are you writing code to make a plot when you can load a csv in excel and get a clean plot in 2 minutes.


Maybe because they need to do that every day with variations and automating away the monotonous chore is one of the main points of writing code.


> I saw my colleagues again and again implementing basic algorithms such as rolling median, or finding a maximum.

Emphasis mine so I will go with no regarding your explanation. It’s obvious the discussion was about one shot plotting from the start. The whole Go vs Python discussion for graphing doesn’t make sense in the context of automation.


Because Excel sucks at plotting, especially if you have non-trivial amounts of data.


For that kind of processing, I prefer to use the database itself. I mean, it is a "data" base and it's pretty good at it. I get it-- sometimes you have a half cooked spreadsheet someone sends you and you need to run some analysis on it.. in that case I don't see any harm in using Go, using visidata, importing it in sqlite or using pandas or whatever.


I have done the same as your colleagues. Recently I discovered that pandas + Jupyter notebooks are much better tools. However, as a gopher who only occasionally needs to crunch data, I rely entirely on an LLM chat agent to do that work.

But the rest of the work is still Go, so the context switching is there but bearable. Imagine a Go crawler dumping CSV and a Python script crunching the CSV.


I've almost entirely replaced Pandas with DuckDB in my day-to-day work. I wonder if it would be an easier lift for someone who is familiar with SQL and doesn't want to pick up Pandas/Polars.


This comment might finally push me to try DuckDB, thanks. I’m quite proficient with scientific python in general, but Pandas API is just alien (and slow). SQL is more natural for its domain.


How's DuckDB better than using SQLite on an in-memory DB?


It's designed for analytics work. Some examples:

* You can run a bunch of analytics queries directly on a jsonlines, csv, or parquet file (or even a glob of files).

* You can output directly to a pandas or polars dataframe.

* You can use numeric python functions inside of queries.

* You can also attach to S3 buckets, postgres databases, or sqlite databases and query them directly.


> I saw my colleagues again and again implementing basic algorithms such as rolling median, or finding a maximum. Instead of loading data into Pandas and doing a group by, they would create some kind of loopy solution that would use maps.

Isn’t this trivial? I use these functionalities and maintain my own Go package for such utilities. A software firm should be able to do this without much issue. In fact, it is better done in house. Don’t have to worry about dependency management and downloading the entire Pandas library for just rolling means, etc.
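For what it's worth, a rolling mean really is only a few lines of Go (a sketch; the function name is mine and edge-case handling is minimal):

```go
package main

import "fmt"

// rollingMean returns the mean of each length-w window of xs,
// maintaining a running sum so the whole pass is O(n).
func rollingMean(xs []float64, w int) []float64 {
	if w <= 0 || w > len(xs) {
		return nil
	}
	out := make([]float64, 0, len(xs)-w+1)
	var sum float64
	for i, x := range xs {
		sum += x
		if i >= w {
			sum -= xs[i-w] // slide the window forward
		}
		if i >= w-1 {
			out = append(out, sum/float64(w))
		}
	}
	return out
}

func main() {
	fmt.Println(rollingMean([]float64{1, 2, 3, 4, 5}, 3)) // [2 3 4]
}
```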


> Instead of loading data into Pandas and doing a group by

SQL can do `group by`. No need for overkill with "data science tools".


If you have a csv, pandas is the "less overkill" route than loading it into a database to use sql.


To each their own tool, but that's nothing awk can't do [1]. I prefer a shell script, because shell is everywhere.

[1] https://stackoverflow.com/a/75073649


Sorry, but no, awk and shell scripts are not a good choice for this kind of work. I'm sorry, but they just aren't!

Sure, go for it if you're just doing stuff for your own personal interest. But if you're doing serious data-driven work in a professional environment, this is going to be an awful choice.


While definitely doable, using a SQL database to do data discovery is obtuse.


It's obtuse, but it's effective for some people. I know someone who hires devs from the .NET and Python space for finance. When asked to suggest a solution to any kind of data problem like this the interviewees split down the middle with the .NET ones almost entirely using a relational database and the Python ones using either the library du jour or suggesting something on the command line (since they often have Linux experience.)

In both parts of their org, the .NET area tends to use SQL exclusively for analysis and the Python folks use Pandas and bunch of other stuff. These departments are also in significantly different parts of their organization with their own mandates and culture.


As someone with 20+ years in finance (hedge funds/trading) who knows .NET, Java, Python, C++, shell, etc., the first question I'd ask is: where's the data?

If I'm being asked to do data analysis, it's because we need the answer yesterday. So, the tool I choose is always going to be a matter of which gets me the answer fastest and with the least amount of friction. That's almost always dictated by where the data is _now_. Not where it'd ideally be.


It's in this email attachment right here. Now what?


To me that sounds like a csv file and it'll likely be a python script from me.


Depends, that is what OLAP is all about, naturally most of the good tools happen to be commercial for the enterprise space.


Not really. DuckDB can operate on raw CSV files without ingesting them. This is as easy, if not easier, than setting up Python and pandas.


Totally agree that it depends on the tools available, but given a typical toolset (SQL and Python), I would lean towards Python because some types of analysis are easier to express in code, especially when working on top of multiple data sets.


XTX? I know that they use a lot of Go.


You don’t need to spend time learning standard data science tools. These tasks are well-defined, thoroughly documented in different blogs, and with today’s AI-driven code generation, basic programming knowledge is sufficient. Attempting to manipulate CSV data and generate visualizations in Go would be a waste of time, delivering subpar results.


The author lists multiple reasons for this, but for me the biggest one is the first one: Go is good for almost everything.

I have extremely good productivity when using Go. Once your project exceeds 100 lines, it is usually even better than Python. And yes, I am aware that Rustaceans did a survey where Rust was crowned as the most efficient language, but in my reality (which may differ from yours) Go is simply the best tool for most tasks.

Why is that? Well, for me there are 3 reasons:

1. The language is extremely simple. If you know 20% of C you are already a Go expert.

2. The core libraries are extremely well thought out.

3. Batteries are included. The toolchain and the core libraries alone can do 90% of what most people will ever need.

When people argue about the validity of these claims, I simply point them to this talk https://go.dev/talks/2012/concurrency.slide#42


Go is the only language I've ever felt highly productive working in. Oftentimes in other stacks I find myself in analysis paralysis on meta things that don't matter:

- what design patterns/language features make sense to use

- what is the best lib to accomplish X

- how do you keep things up to date

With Go, the language is so simple that it's pretty difficult to over-engineer or write overly clever code. Everything you need is in the stdlib. The tooling makes dependency management and upgrades trivial because of strong backwards compatibility.


Sure, if everything one does is either CLI stuff, or UNIX daemons, containers, ....

Because in the realm of graphics, GUI, GPGPU, HPC, HFT, ML, game engines, numeric analysis, ... there is hardly any Go library that really stands out.


Or servers, and about anything that really benefits from concurrency.

Even a lot of games could be made with Go. The GC wouldn't really kill the frame rate of a game unless you really push it.


That is what "UNIX daemons, containers means" on my comment.

Gamedev in Go is only for those who would rather spend their time building engines from scratch.

Additionally Go's lack of support for dynamic linking is a no Go (pun intended) for big A game studios.


My Go is a little rusty by now, but I thought they supported some type of dynamic linking (although, if I recall correctly, it comes with a number of free footguns).


It does a very crude one, where one is bound to expose C ABI types, all shared objects have to be linked with the same runtime, and there are still issues making this rather basic support work on Windows, land of game developers.


Gamedev in Go will require cgo (and not just the easy parts), which ups the complexity quite a bit, unless you're already very familiar with C.

I think it's pretty viable nonetheless, but more for the experienced developer with specific goals outside of the nice parts of common engines, or for a hobbyist who knows the language and wants to tinker and learn.


Sorry, this comment is so incorrect that I have to ask, what are you basing it on?

You can create games today using Go without cgo, and there are numerous examples of shipped games of varying complexity and quality. I do this to ship the bgammon.org client to Windows, Linux and WebAssembly users, all compiled using a Linux system without any cgo.

https://ebitengine.org

https://github.com/sedyh/awesome-ebitengine#games


https://ebitengine.org/en/documents/install.html

For anything other than windows:

> Installing a C compiler

> A C compiler is required as Ebitengine uses not only Go but also C.

I mean, even on platforms without cgo, is it working magically?

No; it's using https://github.com/ebitengine/purego, which is:

> A library for calling C functions from Go without Cgo.

Like... I mean.... okaaaay, it's not cgo, but it's basically cgo? ...but it's not cgo so you can say 'no cgo' on your banner page?

If you're calling c functions, it's not pure go.

If it calls some C library, and it doesn't work on any other platform, it's like 'pure Go, single platform'.

hmm.

Seems kind of like... this is maybe not the right hammer for gamedev; or, perhaps, maybe not quite mature yet...

Certainly for someone in the 'solo dev pick your tools carefully' team, like the OP, I don't think this would be a good pick for people; even if they were deeply familiar with go.


Ebitengine author here.

PureGo is not fully used in Ebitengine for Linux, macOS, and so on yet. You still need Cgo for such environments.


It was based on my own experience (with e.g. sdl2) and, clearly, some ignorance.

I didn't mean to imply that cgo was an insurmountable barrier. But apparently it was a big enough deal for the authors of this engine that they copied over large parts of major API surface to Go to avoid it. Impressive.

However, AFAICT avoiding cgo means using unsafe tricks and trusting that struct layout will stay compatible. Nevertheless, it's a proven solution and as you say used by many already.


Note that Go has very different GC behavior to what .NET GC and likely Unreal GC do. At low allocation rates the pauseless-like behavior might be desirable, but it has poor scaling with allocation rate and cores and as the object graph and allocation patterns become more complex it will start pausing and throttling allocations, producing worse performance[0].

It also has weaker compiler that prevents or makes difficult efficient implementation of performance-sensitive code paths the way C# allows you to. It is unlikely game studios would be willing to compensate for that with custom Go ASM syntax.

Almost every game is also FFI heavy by virtue of actively interacting with user input and calling out to graphics API. Since the very beginning, .NET was designed for fast FFI and had runtime augmentations to make it work well with its type system and GC implementations. FFI call that does not require marshalling (which is the norm for rendering calls as you directly use C structs) in .NET costs ~0.5-2ns, sometimes as cheap as direct C call + branch. In GoGC it costs 50ns or more. This is a huge difference that will dominate a flamegraph for anything that takes, for example, 30ns to execute and is called in a loop.

It is also more difficult to do optimal threading with Go in FFI context as it has no mechanism to deal with runtime worker threads being blocked or spending a lot of time in FFId code - .NET threadpool has hill-climbing algorithm which scales active worker thread count (from 1 to hundreds) and blocked workers detection to make it a non-issue.

Important mention goes to .NET having rich ecosystem of media API bindings: https://github.com/dotnet/Silk.NET and https://github.com/terrafx cover practically everything (unless you are on macOS) you would ever need to call in a game or a game engine, and do so with great attention paid to making the bindings efficient and idiomatic.

For less intensive 2D games none of these are a dealbreaker. It will work, but unless the implementation details change and Go evolves in these areas, it will remain a language that is poorly suited for implementing games or game engines.

[0]: https://gist.github.com/neon-sunset/72e6aa57c6a4c5eb0e2711e1...


Both Unity and Unreal have GCs these days.


And dynamic linking, plugins....

Which Go doesn't do really well, and there is no interest in improving.


Yeah, but their GCs are tuned for the specific task of running a game engine. (I don't know the specifics, but I doubt they are stop-the-world GCs, for example.)


wails, raylib, ebiten


> it's pretty difficult to over engineer

I don't know about that. Every programmer's first Go program seems to like to go to channel city. Perhaps more accurately: Over-engineering your Go program is going to quickly lead to pain. It doesn't have the escape hatches that help you paper over bad design decisions like some other languages do.


Also: interface-itis. Someone saw "accept interfaces, return structs" somewhere and now EVERYTHING accepts an interface, whether it makes sense or not. Many (sometimes even all) of these interfaces have just one implementation.


Doing this allows you to mock out that implementation in unit tests.


A lot of times you want to be able to cmd+click on something and actually see what the hell the code actually does and not get dead-ended at an interface declaration.


What are you using that can cmd+click to take you to a definition, but can't also take you to an interface implementation? I develop Go in Emacs with the built-in eglot + gopls, and M-. takes me to the definition, C-M-. takes me to the implementation(s). It's a native feature of gopls. Sure, it's one extra button, but hardly impossible.


Sounds like a UI bug more than anything.

The compiler certainly knows how to determine if there is only one implementation of an interface and remove the interface indirection when so. There is nothing really stopping the cmd+click tooling from doing the same.


Does the compiler do that? That sounds extremely unlikely, especially because an interface with only one implementation can store the nil type tag or a tagged pointer to an instance of that implementation.


The nil interface is another implementation. I mean, unless it is being used as the sole implementation, but I think we can assume that isn't the implementation being talked about given that it isn't a practical implementation. We're talking about where there is one implementation.


Right. Can you cite anything that says that the go compiler does this sort of whole-program analysis to try to prove that a certain argument to a function is always non-nil, so that it can change the signature of that function and the types of variables declared in other functions?


Uh. No. Why would I ever waste my time proving something I said? If I'm right, I'm right. If I'm wrong, you'll be sure to tell me. No reason for my involvement.


If a nil is another implementation then interfaces with a single implementation don't exist.


Given the following, where is the nil implementation found?

    package main

    type FooInterface interface {
        Baz()
    }

    func bar(fizz FooInterface) {
        fizz.Baz()
    }

    type MyFoo struct{}

    func (*MyFoo) Baz() {}

    func main() {
        var foo FooInterface = &MyFoo{}
        bar(foo)
    }


Nil is built-in. You just have to write the code to instantiate it and the compiler gives you one. The coder does not need to create an implementation, it's there for free.

I would not have called it a "second implementation" myself, but that's your claim to defend, not mine.


map is also built-in. Where do you find the hash map in the given program?

By your logic some nebulous package in a random GitHub repository that happens to satisfy an interface is also another implementation, but you would have to be completely out to lunch to think that fits with the topic of discussion.


> map is also built-in. Where do you find the hash map in the given program?

If you told me a type can be optimized because the compiler knows it can only have non-hash-map uses, but I could put that type into a hash map with a single line, I think I would be right to be skeptical.

> By your logic some nebulous package in a GitHub repository that happens to satisfy an interface is also another implementation, but you would have to be completely out to lunch to think that fits with the topic of discussion.

I expect the compiler to have a list of implementations somewhere. I don't know if I can expect it to track if nil is ever used with an interface. I could believe the optimization exists with the right analysis setup but you called the idea of finding a citation a "waste of time" so that's not very convincing.


> but you called the idea of finding a citation a "waste of time" so that's not very convincing.

Not only a waste of time, but straight up illogical. If one wants to have a discussion with someone else, they can go to that someone else. There is no logical reason for me to be a pointless middleman even if time were infinite.

Now, as fun as that tangent was, where is the nil implementation and hash map found in the given program?


You can head over to godbolt.org and see for yourself that changing the value to nil doesn't change the implementation of `bar`, though it does cause `main` to gain a body rather than returning immediately.


The implementation is preexisting. Even if it was directly used, there would not be an implementation in the snippet. So it not being implemented in the snippet proves nothing.

And what do you mean "someone else"? You're the one that said the compiler "certainly knows" how to do that.


> So it not being implemented in the snippet proves nothing.

It doesn't prove anything, but is what we've been talking about. Indeed, there is nothing to prove. Never was. What is it with this weird obsession you have with being convinced by something? Nobody was ever trying to convince you of anything, nor would there be any reason to ever try to. That would be a pointless endeavour.

> And what do you mean "someone else"?

He who wrote the "citation".


> there is nothing to prove. Never was.

What was the point of your question, if not to prove something?

If you were trying to imply that the implementation doesn't exist, that implication was fatally flawed.

If you were asking to waste time, then it worked.

If you had another motive, what was it?

Are we having a 5d chess game? I thought it was a normal conversation.

> He who wrote the "citation".

Nobody? Nobody wrote a citation.

Do you mean the person that asked for a citation? If so, you're wrong. Finding evidence for your own claims would not make you a middleman. They didn't want to have a discussion with someone else, they wanted a discussion with you, and for that discussion to have evidence. Citing evidence is not passing the buck to someone else, it's an important part of making claims.


> What was the point of your question, if not to prove something?

My enjoyment. For what other reason would you spend your free time doing something?

> If you were trying to imply that the implementation doesn't exist, that implication was fatally flawed.

And if I weren't trying?

> If you were asking to waste time, then it worked.

I ask nothing, but if you feel wasted your time, why? Why would you do such a thing?

> If you had another motive, what was it?

As before, my enjoyment. Same as you, I'm sure. What other possible reason could there be?

> Nobody? Nobody wrote a citation.

There was a request for me to refer another party who was willing to talk about the subject that was at hand – one that you made reference to ('you called the idea of finding a citation a "waste of time"'). Short memory?

> Finding evidence for your own claims would not make you a middleman.

There wasn't a request for evidence, there was a request for a citation. Those are very different things. A citation might provide some kind of pointer to help find evidence, which is what I suspect you mean, but, again, if that's what you seek then you're back to wanting to talk to someone else. If you want to talk to someone else, just go talk to them. There is no reason for me to serve as the middleman.

> it's an important part of making claims.

Nonsense. If my claim does not hold merit on its own, it doesn't merit further discovery. It's just not valuable at all. It can be left at that, or, if still enjoyable, can be talked about to the extent that remains enjoyable.

Perhaps you are under the mistaken impression that we are writing an academic research paper here? I can assure you that is not the case.


It's great that in your reply upthread you actually understood that it was a request for any kind of evidence, including evidence you just created on the spot, but now you pretend not to understand that.


What ever do you mean? There was no change in understanding. You spoke to seeking a proof in addition to a citation, the parent did not originally speak to the proof bit, only to a citation. Entirely different contexts.

In fact, you would have noticed, if you read it, that the "upstream" comment doesn't even touch on the citation at all. It is focused entirely on the proof aspect. While the parent wanted to talk about citations exclusively, at least at the onset. Very different things, very different topics.


Okay so confirmed you're going meta to dodge any actual points being made, and grind any possible progress of the conversation to a halt. Bye!


Confused by academic paper writing confirmed. What led you down that path?

Right click > view implementations


yeah gotta use an IDE for that


I agree with your point. OP wrote:

    > Many (sometimes even all) of these interfaces have just one implementation.
They are missing that mocks are the second implementation. (It took me years to see this point.) I would say that in most of my code at work, 95+% of my interfaces only have a single implementation for the production code, but any/all of them can have a second implementation when mocking for unit tests.
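A toy sketch of that pattern (all names here are made up): the production code has the single "real" implementation, and the test supplies the second one.

```go
package main

import "fmt"

// Notifier is the seam we mock in tests.
type Notifier interface {
	Notify(msg string) error
}

// EmailNotifier is the single production implementation.
type EmailNotifier struct{}

func (EmailNotifier) Notify(msg string) error {
	// would actually send an email here
	return nil
}

// fakeNotifier is the "second implementation" that only exists in tests:
// it records what it was asked to send.
type fakeNotifier struct{ got []string }

func (f *fakeNotifier) Notify(msg string) error {
	f.got = append(f.got, msg)
	return nil
}

// Greet is the code under test; it only ever sees the interface.
func Greet(n Notifier, name string) error {
	return n.Notify("hello, " + name)
}

func main() {
	f := &fakeNotifier{}
	Greet(f, "world")
	fmt.Println(f.got) // [hello, world]
}
```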


> Many (sometimes even all) of these interfaces have just one implementation.


The point of using a mockable interface, even if there's only one real implementation, is to test the behavior of the caller in isolation without also testing the behavior of the callee.

This can be overdone of course, not everything needs this level of separation, but if it makes testing one or both sides easier, then it's usually worth it. It's especially useful for testing nontrivial interactions with other people's code, such as libraries for services that connect to real infrastructure.


Did you miss "just one implementation"? A mock is literally defined by being another implementation. If the 'mock' is your sole implementation, we don't call it a mock, that's just a plain old regular implementation.


I think my comment was clear on the distinction between real and mock implementations. If the code was testable with no need for mocks then certainly remove the interface and devirtualize the method calls.


Your comment was clear about mocks, but not why mocks are relevant to the topic at hand. The original comment was equally clear that it was in reference to where there is only one implementation. In fact, just to make sure you didn't overlook that bit amid the other words, the author extracted that segment out into a secondary comment about that and that alone.

Mocks, by definition, are always a supplemental implementation – in other words, where there is two or more implementations. What you failed to make clear is why you would bring up mocks at all. Where is the relevance in a discussion about single implementations the other commenter has observed? I wondered if you had missed (twice!) the "one implementation" part, but it seems you deny that, making this ordeal even stranger.


It is easy to generate mock implementation code (GoMock has mockgen, testify has mockery, etc.) The lack of a hand-rolled mock implementation doesn't mean that much. For example, many people do not like to put generated code under source control. So, just because you don't see a mock implementation right away doesn't mean one isn't meant to be there. Also, the original author of the function that consumed the apparently unnecessary interface type may have intended to test it, but not had the time to write the tests or generate the mocks. If we are going to be this pedantic, I did say "mockable" interface, implying the usefulness and possibility, but not necessarily existence, of a mock implementation.

Since we are examining code we can't see, we can only speak about it in the abstract. That means the discussion may be broader than just what one person contributes to it. If this offends you or the OP, that was not the intent, but in the spirit of constructive discussion, if you find my response so unhelpful, it is better to disregard it and move along than to repeat the same point over and over again.


> It is easy to generate mock implementation code

Not in any reusable way. Take a look at mockgen and testify: All they do is provide a mechanism to push implementation into being defined at runtime by user code. So, if they, or something like it, is in use the implementation is still necessarily there for all to see.

> Also, the original author of the function that consumed the apparently unnecessary interface type may have intended to test it

Okay, sure, but this is exactly what the commenter replied to was talking about initially. What is a repetition of what he said meant to convey?

> That means the discussion may be broader than just what one person contributes to it.

Hence why we're asking where the relevance is. There very well may be something broader to consider here, but what that is remains unclear. Mocking in and of itself is not in any way interesting. Especially when you could say all the very same things about stubs, spies, fakes, etc. yet nobody is talking about those, and for good reason.

> If this offends you

For what logical reason would an internet comment offend?


Agreed. It's not as 'traditional Go' but I find there is way less interface boilerplate if you just pass functions around.

ie instead of

```

type ThingDoer interface { DoThing() }

func someFunction(thingDoer ThingDoer) { thingDoer.DoThing() }

```

just have

```

func someFunction(doThing func()) { doThing() }

```

Then when testing you can just pass a test implementation of the 'doThing' function that just verifies it was called with the expected arguments.


Can't the Go compiler statically prove that such single-implementation interfaces are indeed that, and devirtualize the callsites referring to them?

Either way, the problem seems to happen in most languages of today, if they (or their community) ever happen to accidentally encourage passing an opaque type abstraction over a concrete one.


I think it actually does that, but in local contexts, where this analysis is somewhat easy.

I also believe you don't actually have to prove it statically: PGO can collect enough data to e.g. add a check that a certain type is usually X, and follow a slow path otherwise


I understand that it does so when the exact type is observed - a direct call on a concrete type. But I was wondering if it performs whole-program-view optimization for interface calls. E.g. given a simple AOT-compiled C# program:

    using System.Runtime.CompilerServices;

    var bar = new Bar();
    var number = CallFoo(bar);

    Console.WriteLine(number);

    // Do not inline to prevent observing exact type
    [MethodImpl(MethodImplOptions.NoInlining)]
    static int CallFoo(Foo foo) {
        return foo.Number();
    }

    interface Foo {
        int Number();
    }

    class Bar: Foo {
        public int Number() => 42;
    }
On x86_64, 'CallFoo' compiles to:

    CMP byte ptr [RDI],DIL ;; null-check foo[0]
    MOV EAX,0x2a ;; set 42 to return value register
    RET
There is no interface call. In the above case, the linker reasons that throughout whole program only `Bar` implements `Foo` therefore all calls on `Foo` can be replaced with direct calls on `Bar`, which are then subject to other optimizations like inlining.

In fact, if we add and reference a second implementation of `Foo` - `Baz` which returns 8, `CallFoo` becomes

    ;; calculate the addr. of Bar's methodtable pointer
    LEA    RAX,[DevirtExample_Bar::vtable]
    MOV    ECX,0x8 ;; set ECX to 8
    MOV    EDX,0x2a ;; set EDX to 42
    ;; compare methodtable pointer of foo instance with Bar's
    CMP    qword ptr [RDI],RAX
    ;; set return register EAX to value of EDX, containing 42
    MOV    EAX,EDX
    ;; if comparison is false, set EAX to value of ECX containing 8 instead
    CMOVNZ EAX,ECX
    RET
Which is effectively 'return foo is Bar ? 42 : 8;'.

Despite my criticism of Go's capabilities, I am interested in how its implementation is evolving. I know it has the feature to manually gather a static PGO profile and then apply it to compilation which will insert guarded devirtualization fast-paths on interface calls, like what OpenJDK's HotSpot and .NET's JIT do automatically. But I was wondering whether it was doing any whole-program view or inter-procedural optimizations that can be very effective with "frozen world single static module" which both Go and .NET AOT compilations are.

EDIT: To answer my own question, I verified the same for Go. Given simple Go program:

    package main

    import (
        "fmt"
    )

    func main() {
        bar := &Bar{}
        num1 := callFoo(bar)

        fmt.Println(num1)
    }

    //go:noinline
    func callFoo(foo Foo) int {
        return foo.Number()
    }

    type Foo interface {
        Number() int
    }

    type Bar struct{}

    func (b *Bar) Number() int {
        return 42
    }
'callFoo' compiles to

    CMP        RSP,qword ptr [R14 + 0x10]
    JBE        LAB_0108ca68
    PUSH       RBP
    MOV        RBP,RSP
    SUB        RSP,0x8
    MOV        qword ptr [RSP + foo_spill.tab],RAX
    MOV        qword ptr [RSP + foo_spill.data],RBX
    MOV        RCX,qword ptr [RAX + 0x18] ;; load vtable slot?
    MOV        RAX,RBX
    NOP
    CALL       RCX ;; call the address loaded from the vtable?
    ADD        RSP,0x8
    POP        RBP
    RET
    LAB_0108ca68                                    XREF[1]:
    MOV        qword ptr [RSP + foo_spill.tab],RAX
    MOV        qword ptr [RSP + foo_spill.data],RBX
    CALL       runtime.morestack_noctxt                 
    MOV        RAX,qword ptr [RSP + foo_spill.tab]
    MOV        RBX,qword ptr [RSP + foo_spill.data]
    JMP        main.callFoo
It appears that no devirtualization takes place of this kind. Writing about this, it makes for an interesting thought experiment what it would take to introduce a CIL back-end for Go (including proper export of types, and what about structurally matched interfaces?) and AOT compile it with .NET.

[0]: VMs like OpenJDK and .NET make hardware exception-based null-checks. That is, a SIGSEGV handler is registered and then pointers that need to throw NRE or NPE either do so via induced loads from memory like above or just by virtue of dereferencing a field out of an object reference. If a pointer is null, this causes SIGSEGV, where then a handler looks if the address of the invalid pointer is within first, say, 64KiB of address space. If it is, the VM logic kicks in that recovers the execution state and performs managed exception handling such as running `finally` blocks and resuming the execution from the corresponding `catch` handler.


Yeah too much concurrency and too many channels definitely hit home hard...


I do programming interviews, and I've found candidates struggle a lot with making an HTTP request and parsing the response JSON in Go, while in Python it's a breeze. What makes it particularly hard: is it the lack of generics, or of a dict data type?


I think it depends on what kind of data you're dealing with. If you know the shape of your data, it's pretty trivial to create a struct with json tags and serialize/deserialize into that struct. But if you're dealing with data of an unknown shape it can be tricky to work with that. In Python because of dynamic typing and dicts it's a little easier to deserialize arbitrary data.

Go's net/http is also slightly lower level. You have to concern yourself directly with some of the plumbing and complexity of making an http request and how to handle failures that can occur. Whereas in Python you can use the requests lib and fire off a request and a lot of that extra plumbing just happens for free and you don't have to deal with any of the extra complexity if you don't want to.

I find Go to be bad for interviewing in a lot of cases because you get bogged down with minutiae instead of working directly towards solving the exact problem presented in the interview. But that minutiae is also what makes Go nice to work with on real projects because you're often forced into writing safer code


It comes down to how the standard library makes you do things. I don't think there's any reason why a more stringly-typed way of handling JSON (or, indeed, a more high-level way of using HTTP) is outside of the realm of possibility for Go. It's just that the standard library authors saw fit not to pursue that avenue.

This variability is honestly one of the reasons why I dislike interviews that require me to synthesize solutions to toy problems in a very tightly constrained window of time, particularly if the recruiter makes me commit at the outset to one language over another as part of the overall interview process. It's frustrating for me, and, having been on the other side, it's noisy for the interviewer.

(In fact, my favorite interview loop of all time required that I use gdb to dig into a diabolical system that was crashing, along with some serious code spelunking. The rationale was that, if I'm good with a debugger and adept at reading the source that's in front of me, the final third of synthesizing code solutions to problems and conforming to institutional practice can be dealt with once I'm in the door.)


My favourite tech interview (so far) was similar: "here's the FOSS code base we're working on. This issue looks like about the size we can tackle in the time we have. Let's work on this together and solve it".

I got to show how I could grok a code base, quickly work out where the problem was, come up with a solution, and contribute a PR. Way better than random Leetcode bullshit, and actually useful: the issue was actually solved and the PR accepted.


I'm not a fan of this approach because candidates may see it as a "cheap" way to get actual work done without paying anyone.


I like your story about debugging during an interview. I can say from experience, you always have one teammate that can just debug any problem. I am always impressed to watch and learn new techniques from them.


This has also been my experience, yeah. My interviewers were very interested in watching me rifle through that core dump. (:

Ultimately, it feels to me like selecting for people who both can navigate existing code and interrogate a running system (or, minimally, one that had gone off the rails and left clues as to why) is the right way to go. It has interesting knock-on effects throughout the organization (in, like, say, product support and quality assurance) that are direly understated.


In our case we give some high-level description beforehand (which mentions working with REST apis) and allow candidates to use any language of their choice.

Also in our case the API has typing in form of generated documentation and example responses. I even saw one Go-candidate copying a response into some web tool to generate Go code to parse that form of json.

I can also say that people who chose Java usually have even more problems, they start by creating 3-4 classes just to follow Spring patterns.


I think other languages cause folks to understand JSON responses as a big bag of keys and values, which have many convenient ways of being represented in those languages. When you get to Go and you want to parse a JSON response, it has to be a well-defined thing that you understand ahead of time, but I also think you adapt when doing this more than once in Go.


If I had one complaint, it’s the use of ‘tags’ to configure how json is handled on a struct, such that it basically becomes part of the struct’s type. It can lead to a fair bit of duplication of structs whose only difference is the json handling, or otherwise a lot of boilerplate code with custom marshal/unmarshal methods. In some cases the advice is even to parse the json into a map, do the conversion, and then serialise it again!

The case I ran into is where one API returned camelCase json but we wanted snake_case instead. Had to basically create another struct type with different json tags, rather than having something like decoders and encoders that can configure the output.

I like Go and a lot of the decisions it makes, but it has its fair share of pain points because of early decisions made in its design that results in repetitive and overly imperative code, and while that does help create code that is clear and comprehensible (mostly), it can distract attention away from the intended behaviour of the code.


As an aside, you may be interested in some of the ongoing work to improve the Go JSON serializer/deserializer:

https://pkg.go.dev/github.com/go-json-experiment/json


That’s some good news that will hopefully smooth json handling out.


You could wrap it in another struct and use a custom MarshalJSON implementation.


    var res map[string]any
    err := json.Unmarshal(data, &res)


Uh huh....and what comes next?

Trying to descend more than a couple of layers into a GoLang JSON object is a mess of casts.


Well, one used to have https://github.com/mitchellh/mapstructure which assisted here, but the lib then got abandoned.



The same thing that happens in JS or Python.

If you get a key wrong, it throws (panics in Go).


I wasn't talking about getting the keys wrong, but rather the insane verbosity of GoLang - `myVariable := retrievedObject.(map[string]interface{})["firstLevelKey"].(map[string]interface{})["secondLevelKey"].(string)` vs. `myVariable = retrievedObject["firstLevelKey"]["secondLevelKey"]`

"Oh, but that's just how it is in strongly-typed languages" - that may well be true, but we're comparing "JS or python" with GoLang here.


Especially when you're not certain of the type used for numbers.


> I do programming interviews, and I've found candidates struggle a lot with making an HTTP request and parsing the response JSON in Go, while in Python it's a breeze. What makes it particularly hard? Is it the lack of generics or of a dict data type?

Have you considered that your interview process is actually the problem? Focus on the candidate’s projects, or their past work experience, rather than forcing them to jump through arbitrary leet code challenges.


Making an HTTP request and dealing with JSON data is a weed-out question at best. Not sure if you are interpreting the grandparent comment as actually having them write a JSON parser, but I don't think that's what they meant.


Either that came up in an interview recently myself, or it wasn't clear to me that I was allowed to use encoding/json to parse the JSON and then deal with the result. I happened to bomb that part of the interview spectacularly, because I haven't written a complex structure parser in years, given that every language I've used for such tasks ships with proper, optimized libraries to do it.


Well these are not arbitrary, we work with a number of json apis on a weekly basis, supporting the ones we have and integrating new ones as well. This is a basic skill we are looking for, and I don't see it as a "leet code challenge".

Candidates might have great deal of experience debugging assembly code or generating 3d models, but we just don't have tasks like that.


There is a dict-equivalent data type in Go for JSON (it's `map[string]any`), it's just rather counter-intuitive.

However, as a Go developer, I'm one of the people who consider that JSON support in Go should be burnt down and rebuilt from scratch. It's limited, annoying, full of nasty surprises, hard to debug, and slow.


There was a detailed proposal to introduce encoding/json/v2 last year but I don't know how far it's progressed since then (which you probably already know about but mentioning it here for others):

https://github.com/golang/go/discussions/63397


I've done, literally, hundreds and hundreds of coding interviews, and an HTTP API was a part of lots of them. Exported vs non-exported fields and json tags are about the only issues I've seen candidates hit in Go, and I would just help in those kinds of cases. Python is marginally easier for many.

The problem was java devs. Out of dozens upon dozens of java devs asked to hit a json api concurrently and combine results, nearly ZERO java devs, including former google employees, could do this. Json handling and error handling especially confounded them.


Hold on, did you just say Go doesn't have a Dictionary data type?

I'm a Javascript, Lua, Python, and C# guy and Dict is my whole world.


It does. https://go.dev/blog/maps

What the poster was alluding to is that you usually prefer to deserialize to a struct rather than a record/dict/map.


Not a programmer, so this is every programmer's chance to hammer me on correctness.

No, Go doesn't have a type named Dict, or Hash (my Perl is leaking), or whatever.

It does have a map type[1], where you can define your keys as one type and your values as another type, and that pretty closely approximates Dicts in other languages, I think.

[1]: https://go.dev/blog/maps


So, these types (and many more) are hash tables.

https://en.wikipedia.org/wiki/Hash_table

They're a very common and useful idea from Computer Science so you will find them in pretty much any modern language, there are a lot of variations on this idea, but the core idea recurs everywhere.


I have a quibble here. A hash table, the basic CS data structure, is not a two-dimensional data structure like a map, it is a one-dimensional data structure like a list. You can use a hash table to implement a Map/Dictionary, and essentially everyone does that. Sets are also often implemented using a hash table.

The basic operations of a hash table are adding a new item to the hash table, and checking if an item is present (potentially removing it as well). A hash table doesn't naturally have a `V get(key K)` function, it only naturally has a `bool isPresent(K item)` function.

This is all relevant because not all maps use hash tables (e.g. Java has TreeMap as well, which uses a red-black tree to store the keys). And there are uses of hash tables besides maps, such as a HashSet.

Edit: the term "table" in the name refers to the internal two-dimensional structure: it stores a hash, and for each hash, a (list of) key(s) corresponding to that hash. Storing a value alongside the key is a third "dimension".


I think I'd want to try to decode into map[string]interface{} (offhand), since string keys can be coerced to that in any event (they're strings in the stream, quoted or otherwise), and a value can hold any valid json scalar, array, or object (another json sub-string).


That of course works, but the problem is then using this. Take a simple JSON like `{"list": [{"field": 8}]}`. To retrieve that value of 8, your Go code will look sort of like this:

  var v map[string]any
  json.Unmarshal(myjson, &v)
  lst := v["list"].([]any)
  firstItem := lst[0].(map[string]any)
  field := firstItem["field"].(float64)
And this is without any error checking (this code will panic if myjson isn't a json byte array, if the keys and types don't match, or if the list is empty). If you want to add error checking to avoid panics, it gets much longer [0].

Here is the equivalent Python with full error checking:

  try:
    v = json.loads(myjson)
    field = v["list"][0]["field"]
  except Exception as e:
    print(f"Failed parsing json: {e}")
[0] https://go.dev/play/p/xkspENB80JZ


And, if you hate strong typing, there's always map[string]any.


Really, the mismatch is at the JSON side; arbitrary JSON is the opposite of strongly typed. How a language lets you handle the (easily fallible) process of "JSON -> arbitrarily typed -> the actual type you wanted" is what matters.


    > arbitrary JSON is the opposite of strongly typed
On the surface, I agree. In practice, many big enterprise systems use highly dynamic JSON payloads where new fields are added and changed all the time.


Go has had a dict-like data type from the jump; they're called "maps" in Go.

Some of early Go's design decisions were kinda stupid, but they didn't screw that one up.


Go has maps, json parsing and http built in. I'm not exactly sure what this person is referring to. Perhaps they are mostly interviewing beginners?


Go maps have a defined type (like map[string]string), so you can only put values of that type in them. A JSON object with (e.g.) numbers in it will fail if you try to parse it into a map of strings.

As others have said, the issue with Go parsing JSON is that Go doesn't handle unstructured data at all well, and most other languages consider JSON to be unstructured data. Go expects the JSON to be strongly typed and rigidly defined, mirroring a struct in the Go code that it can use as a receiver for the values.

There are techniques for handling this, but they're not obvious and usually learned by painful experience. This is not all Go's fault - there are too many endpoints out there that return wildly variable JSON depending on context.


I feel like good JSON handling is sort of table stakes for any language for me these days.

The pain of dealing with JSON in Go is one of the primary reasons I stick mostly with nodejs for my api servers.


> The pain of dealing with JSON in Go is one of the primary reasons I stick mostly with nodejs for my api servers.

Unless you're dealing with JSON input that has missing fields, or unexpected fields, there is no pain. Go can natively turn a JSON payload into a struct as long as the payload's fields recursively match the struct's fields!

If, in any language, you're consuming or generating JSON that doesn't match a specific predetermined structure, you're yolo'ing it and all bets are off. Go makes this particular footgun hard to do, while JS, Python, etc make it the default.

Default footguns are a bad idea, not a good idea.


This.

In $other_language you'll parse the JSON fine, but then smack into problems when the field you're expecting to be there isn't, or is in the wrong format, or the wrong type, etc.

In Go, as always, this is up front and explicit. You hit that problem when you parse the JSON, not later when you try to use the resulting data.


Go's JSON decoder only cares if the fields that match have the expected JSON type (as in, list, object, floating point number, integer, or string). Anything else is ignored, and you'll just get bizarre data when you work with it later.

For example, this will parse just fine [0]:

  type myvalue struct {
    First int `json:"first"`
  }

  type myobj struct {
    List []myvalue `json:"list"`
  }
  js := "{\"list\": [{\"second\": \"cde\"}]}"
  var obj myobj
  err := json.Unmarshal([]byte(js), &obj)
  if err != nil {
    return fmt.Errorf("Error unmarshalling: %+v", err)
  }
  fmt.Printf("The expected value was %+v", obj) //prints {List:[{First:0}]}
This is arguably worse than what you'd get in Python if you tried to access the key "first".

[0] https://go.dev/play/p/m0J2wVyMRkd


It totally makes sense from a Go perspective: You created a struct, tried (but failed) to populate it with some json data, and ended up with a value initialised to its zero-value. This is fine :)

One of the techniques for dealing with JSON in Go is to not try to parse the entire JSON in one go, but to parse it using smaller structs that only partially match the JSON. e.g. if your endpoint returns either an int or a string, depending on the result, a single struct won't match. But two structs, one with an int and one with a string - that will parse the value, and then you can work out which one it was.


> It totally makes sense from a Go perspective: You created a struct, tried (but failed) to populate it with some json data, and ended up with a value initialised to its zero-value. This is fine :)

To me it looks like a footgun: if the parsing failed then an error should have been signalled. In this case, there is no error and you silently get the wrong value.


Yeah, I presented that wrong. It's not actually a failure as such.


> It totally makes sense from a Go perspective: You created a struct, tried (but failed) to populate it with some json data, and ended up with a value initialised to its zero-value. This is fine :)

I do agree that there are good reasons on why this behaves the way it does, but I don't think the reason you cite is good. The implementation detail of generating a 0 value is not a good reason for why you'd implement JSON decoding like this.

Instead, the reason this is not a completely inane choice is that it is sometimes useful to simply not include keys that are meant to have a default value. This is a common practice in web APIs, to avoid excessive verbosity; and it is explicitly encoded in standards like OpenAPI (where you can specify whether a field of an object is required or not).

On the implementation side, I can then get away with always decoding to a single struct, I don't have to define specific structs for each field or combination of fields.

Ideally, this would have been an optional feature, where you could specify in the struct definition whether a field is required or not (e.g. something like `json:"fieldName;required"` or `json:"fieldName;optional"`). Parsing would fail if any required field was not present in the JSON. However, this would have been more work for the Go team, and they generally prefer to implement something that works and be done with it, rather than covering all important cases.

Separately, ignoring extra fields in the JSON that don't match any fields in the struct is pretty useful for maintaining backwards compatibility. Adding extra fields should not generally break backwards compatibility.

> One of the techniques for dealing with JSON in Go is to not try to parse the entire JSON in one go, but to parse it using smaller structs that only partially match the JSON. e.g. if you endpoint returns either an int or a string, depending on the result, a single struct won't match. But two structs, one with an int and one with a string - that will parse the value and then you can work out which one it was.

I have no idea what you mean here. json.Unmarshal() is an all-or-nothing operation. Are you saying it's common practice to use json.Decoder instead?


> I have no idea what you mean here. json.Unmarshal() is an all-or-nothing operation. Are you saying it's common practice to use json.Decoder instead?

No, I mean you create a struct that deals with only a part of the JSON, and do multiple calls to Unmarshal. Each struct gets either populated or left at its zero-value depending on what the json looks like. It's useful for parsing json data that has a variable schema depending on what the result was.


Umm, you can unmarshal into a map[string]any, you know?

    dataMap := make(map[string]any)
    err := json.Unmarshal(data, &dataMap)


You can, but then it's a lot of work to actually traverse that map, especially if you want error handling. Here is how it looks for a pretty basic JSON string: https://go.dev/play/p/xkspENB80JZ. It's ~50 lines of code to access a single key in a three-layer-deep JSON.


It's more like 30 lines of code without the prints. However, one generally should write generic functions for this. The k8s apimachinery module has helper functions which are useful for this sort of stuff. Ex: `NestedFieldNoCopy` and its wrapper functions.

https://github.com/kubernetes/apimachinery/blob/95b78024e3fe...

Ideally, such `nested` map helper functions should be part of the stdlib.


Sure, in production you'd definitely want something like that, but the context was an interview exercise, I don't think you should go coding generic wrappers in that context.


It does (it's called a map) and Go does have generics, the previous poster clearly doesn't know what they're talking about.


Go has ruined all other languages for me. I really fell in love with Gleam recently and was trying to implement a fun side project in it. The problem is I really don’t have enough time to learn the intricacies of it, with a startup, two kids, etc. As soon as I have to look at some syntax and really _think_ about what it’s doing every time I look at it, I lose interest. I kept trying and eventually implemented it in Go much faster. And while doing it in Go I kept wishing I could just use actors and whatever to make it simpler but, is it really simpler?


I haven't looked too deeply into it but I came across https://github.com/ergo-services/ergo not too long ago and thought it could be pretty interesting to try using OTP in Golang

Packaging a Go service in Docker and dumping it into k8s is probably the easier/better understood path but also deploying Go services onto an Erlang node just sounds more fun


Yep.. say you wanted to make a simple http service that needs to

* request a json.gz file from another HTTP service
* decompress it
* deserialize the json, transform it a bit

That's net/http (and maybe crypto/tls), compress/gzip, encoding/json. I need to make zero decisions to get the thing off the ground. Are they the best libraries in the world for those things? no.. but will they work just fine for almost every use case.


Sounds like

    curl ... | jq ...
to me!

Not saying you shouldn't use Go for that problem, in a particular context, but it does drive home how much of programming is glue ... there is combinatorial amounts of glue, which is why JSON, HTTP, compression, etc. end up being part of so many problems


There’s a big difference between building something on curl and jq and building something using a language’s standard library.

Everything is just bits at the end of the day. Just about anything can do anything.


I feel the same.

Especially in the UI/UX world: when you want to just start building a demo, you're paralyzed by dozens of build toolchains in between.

Wanna get started with a starter template? Tough luck, it hasn't been updated for a year, so it takes even longer.

In Go, everything is opinionated and unified upstream. Conventions matter, because they allow efficiency and reuse of patterns and architectures.


It has not been my experience that Go is good for almost everything. On the contrary, it seems good at a couple very specific (though very common) niches: network services and cli utilities. But for most of what I do right now - data heavy work - it has not turned out to be very good (IMO). It really is just not better in any way to have to constantly write manual loops to do anything.


I think Go is pretty OK as a language for building data pipelines (I’m assuming you meant statistical ones, but the same argument applies to more data transform-y ones). What it is not good for is doing exploratory analysis (which is where Python shines).

Manual loops are pretty annoying when the focus is on figuring out which loops to write (exploratory phase). However, they are pretty nice once you’ve figured it out and need to write a durable bit of code where you prioritise readability over conciseness (productionisation).

Going from Python to <any language> between the exploratory phase and the productionised pipeline is going to be a pain; I don’t think Go is particularly worse than others. At that point it’s all about the classic software tradeoffs (performance vs velocity vs maintainability), and I do think Go is a good choice in many situations.


Well I totally disagree that writing manual loops is ever "pretty nice", but I agree that it's not as big an issue in final-version code as it is in exploration.

And I'm also in strong agreement that making any language transition between exploration and implementation is problematic. I do think go is worse than most, because I just think it has a mostly cultural allergy to manipulating collections of data as collections rather than element-by-element, but I agree that this is mostly lost in the noise of doing any re-write into a new language.

But this is why Python is best in this space. It simply has the best promotion path from experimentation to production. It is better than other "real" languages like go, because it thrives in the exploratory phase, and it is better than purpose-specific languages, like R, because it is also a great general-purpose language.

The other contender I see is Julia, which comes more from the experimentation-focused side, while trying to become a good general purpose language, but unfortunately I think it still needs to mature a lot on that side, and it's not clear that it has the community to push it far enough fast enough in that direction (IMO).

Even very performance-critical use cases work with python, because the iteration process can follow experimentation -> productionization -> performance analysis -> fixing low-hanging bottlenecks by offloading to existing native extensions -> writing custom native extensions for the real bottlenecks.


I agree with this, there is a reason Python is the king of data.


Yeah, but Go is also worse (in my experience) than most, if not all, of the other general-purpose languages I've used, for this niche.

For instance, Rust is actually pretty great in this space, despite being very ... not-python, and Java also has decent libraries. Then C++ (and Fortran!) are good for a very different reason than python, "on the metal" performance, which Go also isn't a great fit for.


Go is not good for data science and ML. It doesn't even have a proper, maintained dataframe library for data science. R and Python beat it hands down. Rust also beats it now, thanks to polars. And mobile? gomobile is not maintained. Fyne is amateur level on mobile.

AFAIK Go has no maintained 3d game engines.

Go has its well-established niche for middle-ware services and CLI tools. And that's about it. If your domain is outside this, its best to choose another language.


Is the reason for the absence of a well-maintained dataframe library lack of demand? It looks like Gota and Dataframe-go are abandoned, while Gonum isn't particularly active. Did these wither on the vine because no one used them?


It's likely because it's a pain to call Fortran or C (I'd be suspicious of anyone trying to reimplement openblas in go).


Why would you be suspicious?


There is no good reason to pursue a project like this (or if there is, it would be very surprising to me), so it would reek suspiciously of "not invented here".


Because more likely than not, they're going to screw it up, and the choice to use Go was not made for sound engineering reasons (Go is not the only language this would apply to, but because of the lack of good FFI, it's more likely to happen). The exception would be if said person had a solid background in numerical computing and was up-to-date with the state of the field, but that's pretty easy to find out.


I think it's lack of demand, yeah, but I think that's downstream of a real culture clash. Exploratory data analysis is just really not a good fit, culturally, for go. You don't want to be explicit about everything and check every single error case, etc.

But then it's natural to evolve production systems out of exploratory analyses, rather than re-writing everything from scratch, unless there is a very compelling reason to do that. The compelling reason is usually to get more speed, but that's not go's strong suit either.


Also anything that requires interop/FFI, syscalls, and lower level stuff. It's extremely hard to record your screen in Go, for example. In Rust this is much more doable, and there are even crates for it.

It's not impossible with Go, though. There's an amazing windows impl here: https://github.com/kirides/go-d3d

...but if you look at the code, it's clear that you have to work against the language in some capacity


I can totally agree that Go is good enough for most projects, to the extent it's a go-to choice for many. However, it's not always the best tool - frequently it wins just because it allows to prototype quicker. YMMV, but for me, it's not a love relationship, it's a love-hate relationship.

> The language is extremely simple.

Simplicity is a double-edged sword. It didn't even have basic generics for a long while, and it's painful to remember how bad it was without them. Still doesn't have a lot of stuff anyone would expect from a modern language.

> If you know 20% of C you are already a Go expert.

Strong disagree. Knowing $language means knowing the patterns and caveats and (sorry for a cliche term, I'm not fond of it, but I don't have a better term so I hope you get the meaning) "best practices". Those are drastically different between C and Go. Especially when it comes to concurrency and parallelism.

> The toolchain and the core libraries alone can do 90% of what most people will ever need.

This is provably false. Virtually every serious project out there pulls a ton of dependencies, like database drivers, configuration toolkits, tracing and logging libraries and so on. Heck, I think a lot of shops have project templates for this reason - to standardize on the layout and pull the company-preferred set of libraries for common stuff. Core libraries are slowly getting there but so far they don't have very basic stuff like MapSet[T] or JSON codecs for nullable types like sql.NullString, so you gotta pull third-party stuff.


"Good enough" plus "I already know it" almost always beats the perfect tool that I don't already know.


There is obviously truth in this, but I think it is more often the case than many people realize that it is more efficient to learn the better tool "good enough" than to use the worse "I already know it" tool.

For instance, I've seen lots of people not want to learn sql and instead write complicated imperative implementations of relational primitives in the "I already know it" programming languages that are clearly "good enough" (because they are turing complete after all). But sql is usually a much better way to do this, and isn't hard to learn "good enough" to do most things.


Well, there's a balance. I'm not saying "never learn anything new, assembly's good enough and I know how to use it". Learn newer and better tools.

At the same time, don't learn every newer and better tool. There are too many. You don't have enough time, even if you never do anything but learn.

SQL is enough better to be worth learning. The web framework of the week? Not so much.

And what's "worth learning" depends on what you're trying to do. For a home project, I'll use what I know, unless the goal of the project is "learn how to use X". For work, the question is whether it brings enough to the table to be worth the learning time. Sometimes it is; often it isn't.


Yeah what I'd say is: Seek out and be open to advice. There's a "don't know what you don't know" problem here, as always. But this is also part of the point of reading sites like HN! People here are saying "actually there are tools that are net positive to learn a bit because they are much better choices for particular niches". That is advice! It's fine and all to say "nah, I'm good", but in many cases that's doing yourself a disservice. I really do see people writing tedious for loops in go because it is what they're comfortable with, when they would be much better served writing sql and using a language with dataframes.

Most of the time people aren't just on a kick about selling some hot new thing (and I'm old enough that go was the hot new thing for me at one point!), they actually have relevant experience and are giving useful advice.


> When people argue about the validity of these claims, I simply point them to this talk https://go.dev/talks/2012/concurrency.slide#42

There's nothing impressive in these slides. This may have been pretty good in 2012, but this code looks very much like the equivalent Swift or Rust.


It's better in Swift or Rust. Go channels are the worst thing about the language, even trivial use cases are difficult to get right.


Personally I don't find it great for prototyping. There's just too much boilerplate, which is why I use other languages for fun/personal projects.


Absolutely, it's my least favorite prototyping environment. It forces me to think about a bunch of stuff I don't want to think about up front.


Is there much boilerplate aside from err checks and JSON tags? Even then, your IDE / copilot should automatically insert those along with imports and package names.


Err checks are a big one. I don't want to worry about error handling when prototyping. There are little things like having to prefix methods with the struct name and type, and bigger things like no default arguments and by name parameters, which makes setting up test fixtures cumbersome. Also, functions don't compose well because there can be more than one return value, so you end up just writing more intermediate values.


The requirement to define at least one function is boilerplate by itself. Also, an IDE doesn't fully solve the inability to compile and run a partially written program (in fact, the Go compiler is even more pedantic than rustc in some respects), which matters a lot in the use cases where dynamically typed languages are strongest.


My personal Python threshold is 10k lines. After that I tend to lose track of what I am doing, and I start to miss static typing and, nowadays, an IDE to navigate it. Maybe future Python IDEs can use AI to scan the codebase and compensate.


Type annotations plus modern IDEs have (in my experience) made this mostly a non-issue. It does require a bit more setup to get passable static analysis set up, which comes "for free" in languages with good compilers (like go!), but it's at least possible now to get to a (IMO) good point.

(FWIW, I moved away from python and ruby over a decade ago because of exactly this frustration, but I'm finding modern python to be pretty pleasant.)


Wait... in what year was this comment written ? ;)

There's type hinting and IDEs do navigate it.


optional hints

Both make them much less useful than in a typical statically typed language. Besides that, the tooling sucked hard not too long ago.


Agreed on all points except #3 re the core libraries. Coming from the Java ecosystem, it was a bit of a shock to see how small the standard libraries are. For example, the minuscule collections library, among others.


Go has very little story when it comes to desktop or mobile GUI apps, which is too bad because it would be a very productive language for that kind of thing.


This makes me wonder: why is there less effort on the mobile side? For the web, there are plenty of options.

Perhaps Go devs are not interested in native solutions (like Kotlin/Swift, etc.) and prefer the web stack (JS, React, etc.)?


> Once your project exceeds 100 lines

100 lines in which language? 100 lines in Go will probably have 50 lines of if err != nil :P


> The author lists multiple reasons for this, but for me the biggest one is the first one: Go is good for almost everything.

I'd nuance that claim.

I haven't found Go to be _particularly good_ at any specific task I've undertaken, but Go was _good enough_ for many of these tasks. Which makes Go a reasonable general programming language.


This is true, but there are a number of _good enough_ languages, and personally I don't think go is top-tier at this use case of being the go-to swiss army knife. I do think it is top-tier at being a good choice for tools in its niche. But not as "default language I reach for when I don't want to waste time thinking about it".


Agreed. For my personal sensibilities, Python or TypeScript are better default languages. Of course, I'm a bit obsessive about quality/safety, so I'm probably going to use Rust for most tasks :)


I'm asking this earnestly but is Go suitable for native GUI apps (not web)? 3D graphics/Games? Audio processing?


I've written a GUI in Fyne, it's decent but go is not a first class GUI language. Fyne is... fine but building code is annoying.

See https://GitHub.com/ssebs/go-mmp


Yes, because one can turn off the GC, allocate memory up front (arena style), or call C or assembly from Go.

The performance is likely to be good enough with the GC turned on, though, because it is unusually fast.


Turning off the GC isn't the blocker. Plenty of GC languages manage.

I'm more interested in how Go handles graphics APIs and binding a render thread, or working with GUI threads, considering how goroutines are done. Does one need to write non-idiomatic Go, avoiding goroutines, or is there some trick?

For example, GTK isn't thread safe so you can't just use that library without considering that.


No, Go is not for native GUI apps. I recently made some rough Go bindings to minifb, and while easy to do, it wasn't productive at all. Errors are hard to follow; are they from Go or minifb? Callbacks work until I have too many calls, then the app might freeze altogether.

Go is great for image/draw and things like passing pixels: (C.uint)buffer.Pix

It comes down to Google wanting Go to use the web as the interface, which in practice means not doing dynamic linking (except on Windows).

To your question, Go GUI apps will have runtime.LockOSThread() near the start, so the GUI stays on a single OS thread while everything else runs on green/lightweight goroutines.


Offhand, I think the general pattern is to bind related external libraries (e.g. a GUI stack) to one agent thread and use channels to send it messages.


For games potentially Ebiten, I believe it has some audio processing support too.


Probably not Ebiten for 3D games. To be fair, at this point, when you are doing somewhat specialist things, Go starts to lose its edge. I remember trying to replicate some NumPy code in Go and that was a pain. However, that's just because Python is too good at scientific things.


People write games and GUI apps in Python, so why not?


Is there some particular reason Python is similar to go with regards to GUIs or are you saying everything can do everything?


Python is a slow interpreted language (I write Python every day). Go is compiled and an order of magnitude faster for most everything.

So, I guess, yeah, everything can do everything. Computers are fast where language is rarely going to be the deciding factor.


4. Stability and backwards-compatibility. I have never seen a Go version upgrade break anything. Meanwhile I have colleagues who do a thousand-yard-stare if you so much as mention upgrading the version of Python we're using.


Python upgrades are fine, actually. The python library ecosystem is a bit of a mess in general, which does affect this, but the tools have actually improved to make this more manageable lately.


Yeah, this. It's just good enough for 95% of use-cases, while being very productive.

Personally, one of the biggest selling points for me is that imo modelling concurrency and asynchronicity via fibers (goroutines), rather than async/await, is just a ton easier and faster to work with. Again, there are use-cases for the alternative (mainly performance, or if you like to express everything in your type-system) but it's just great for 95% of use-cases.


I always find it odd when people refer to it as being "very productive". I find all the boilerplate very un-productive to work with. Every time I pick go back up I'm shocked how many manual loops and conditionals I have to write.


How is it different than, say, java for this generalist purpose?


IME, there are two main differences between go and java:

1) go is more "batteries included". Modules, linting, testing, and much more are all part of the standard cli. Also, the go stdlib has a ton of stuff; in java, there is almost always a well-built third party library, but that requires you to find and learn more things instead of just reaching for stdlib every time.

2) golang is "newer" and "more refined". This is pretty subjective, but golang seems to have fewer features, and the features are more well-planned. It's a more "compact" or "ergonomic" language, whereas java has built up a lot of different features, and not all of them are great. You can always ignore the java features you don't like ofc, but this is still a bit of cognitive overhead and increased learning time.


There are surprisingly few languages in this category, especially if you limit consideration to statically typed. Go, C#, Swift? Nim, Crystal? F#? Kotlin?

Go is not my favorite language, but it really is exceptional in terms of its effective utilitarian design.


Eh, imo the Go libraries still aren't up to par with Java's out-of-the-box libraries. Like, there's still no Set class, nor the equivalent of Map.keys. Yeah, they're easy to write, but that's still not an included battery.

Also, while the cli to add stuff is useful, there's still nothing to the level of maven or gradle for dependency management, and I usually find myself doing some fun stuff with `find -execdir` for module management.

Different strokes for different folks though. Java (really, kotlin) still makes a ton of sense for backend to me given how the jvm is architecture independent and you don't have to make tradeoffs/switch to graal if it's a long lived service.

Golang is nice, love it, but it's still got a bit to grow. I'm just happy they added modules and generics. I don't think it's a matter of being 'well thought out' as much as it's a simple language that cares a lot about simplicity and backwards compatibility and has iterated ever since. For me, the killer app isn't goroutines so much as the fact that you can produce a binary that doesn't depend on shared/dynamically linked libraries, on all platforms, which is awesome for portability independent of environment. No more gcc vs clang vs msvc headaches, no more incompatible shared libraries, no more wrong version of the JVM or a bad Python modules path, etc.

Oh, also, Java had like a 15-year head start on Golang, and it wasn't until Java 8 that many of my biggest complaints were addressed. And yeah, stuff like Apache Commons + Log4j + Mockito/JUnit are pretty much required dependencies, and Maven/Gradle aren't language-native.

The best STL is probably Python's imo, but even that doesn't support a proper heap/priority queue implementation. For data structures specifically, I think Java/Kotlin has the best STL. All of this ignoring .NET or Apple platforms.


maps.Keys is coming in a few days with 1.23.

I generally agree with what you are saying. Although I wouldn’t hold out Maven as a paragon. I used to make my living untangling pom.xml files. I don’t think anybody is feeding their family helping people with go.mod messes — although I wish somebody would do that for kubernetes.


> maps.Keys is coming in a few days with 1.23.

Yup! Lots of good stuff in the golang.org/x libraries, happy maps.Keys is graduating.

Also totally agreed on maven not being an ideal, or even lovable dependency management system, but trust me that there are absolutely people spending too much time wrangling go.mod files, especially in open source where go.mod files cross repo boundaries (and thus consolidated test automation).

I typically do a bunch of git reset --hard's as I iterate on my find -exec to run the Go CLI commands for module upgrades (for whenever Renovate fails). I like how it's much easier to experiment with than Maven (maybe I just didn't know Maven well enough), but it's still definitely a headache.


> Like there's still no Set class

map[T]struct{} ?


So, like they said, no Set class.


https://github.com/deckarep/golang-set offers a thread safety option and methods like contains all, intersect, and equal.


There is also https://github.com/hashicorp/go-set which includes HashSet and TreeSet implementations for types with custom hashing functions, and orderable data respectively.


You do need to buy a lot of batteries with go.


FWIW, I think versions of java from the last 5ish years feel both "newer" and "more refined" than go.

But I do think go and java are very comparable languages. To me, go's advantage over java is more about use-case; go is the clear choice for little cli tools, because it's pretty far off the beaten path to coax java to start up quickly enough for this. This is the sweet spot for go, IMO.


As someone who isn't super proficient in Java, I usually find Java daunting to get started with, full of "meta" issues like those in my other comment.

What JVM do I use? Does it matter?

Does it matter what version I install, what if I have to install/manage multiple versions?

If I want to write a web service can I use vanilla Java stdlib or do I have to use Spring or some framework? If I use Spring, do I have to get into the weeds of dependency injection and other complexity to actually get my app off the ground?

With Go, none of those questions exist. I install the latest Go, create a main.go file, I use net/http and I'm off to the races.


I think it's good that Go, among other well-made toolchains, brought attention to the importance of good CLI UX.

But it's not something that is unique to Go:

    dotnet new web
    dotnet run
    curl localhost:5050
also 'dotnet watch' for hot-reload.


> What JVM do I use?

the latest LTS (long-term support) version, so currently 21, next 25 (late 2025).

> Does it matter?

new is always better

> what if I have to install/manage multiple versions?

https://sdkman.io/

> If I want to write a web service can I use vanilla Java stdlib or do I have to use Spring or some framework?

Spring

> If I use Spring, do I have to get into the weeds of dependency injection

yes, but it's not hard (or I've done it long enough)

> other complexity

that might just be inherent complexity when you have to deal with a web service. Personally, I dislike having to configure security.

So basically go to Spring Initializr (https://start.spring.io/), pick the latest LTS Java & Maven (or Kotlin & Gradle-Kotlin), and you're off to the races.


Besides what neonsunset points out for the .NET world, where alongside C# we get the pleasure of enjoying F#, VB, and C++/CLI, it is relatively easy for Java.

When one doesn't know, just like with Go, one picks up the reference implementation => OpenJDK.

For basic stuff, the standard library also has you covered in the jdk.httpserver module.

By the way, where is the Swing equivalent on Go's standard library?


Just from an ergonomic standpoint, it's a million times easier to deploy a Go binary than a whole JVM and a JAR file.


When it is a pure Go application, only as CLI or UNIX server.


java on its own is not a competitor to go, IMO, due to the batteries included "culture" in the go ecosystem.

I would need to compare it with, for example, Java + Spring(Boot).

I find Go to be simpler and more pleasant to use.


I agree with Go being simpler, but modern Java and Spring Boot is also fine. Backend programmers are spoiled with riches nowadays with all the completely workable options we have.


Batteries included but there’s not even a Set[T] in the standard library..


Where is the GUI battery in Go?


Dependencies in Java are a pain in the ass.

Publishing libraries for your own consumption is even harder.

Publishing libraries for world consumption means now you also need to be a PKI expert.


Builtins well thought out? Have you seen the bignumber API?


> Go is good for almost everything.

Java is good in more areas than go


Go is everything I don't want in a language for my personal projects. It's verbose; every simple task feels like a lot to write. It's not expressive: what would be a one-liner in Python takes three for loops in Go. I constantly need to find workarounds for the lack of proper enums, the lack of sum types, no null safety, etc.

I’m sure these are the exact reasons why Go is good for enterprise software, but for personal projects, I get no fun out of using it


That's why I love Go so much.

You want to write a very clever library using elegant abstractions and generics to give a cool, innovative interface to your problem? Tough luck, you can't. So instead, you will just have to write a bog-standard implementation with for-loops and good old functions, which you copy and tweak as needed where you really need something more complicated. It will work perfectly fine and end up being very readable for you and for the other people reading your code.

Go is basically anti-Haskell. It forces you to be less clever and that's great.


Take a look at a JSON parser or ORM written in Go. It's god awful the things they have to do to work around Go's type system. The average developer won't see these things, they're typically just writing glue code between Go's great stdlib (which also contains wild things if you take a look) and other 3rd party dependencies.


> The average developer won't see these things, they're typically just writing glue code between Go's great stdlib (which also contains wild things if you take a look) and other 3rd party dependencies.

This is what most of us are doing every day, and exactly what Go excels at.


Sum types allow for more robust modeling of the API boundary in libraries, so in fact having a better type system is desirable even when "just gluing libraries", because it can make incorrect program states physically unrepresentable.


Sum types are great – but Go manages to be unreasonably effective even in their absence.


> unreasonably effective

I'm really tired of this PR speak when it comes to programming languages. Why / how is it unreasonably effective? More effective than what?


If you want to progress your career you'll need to take on hard problems at some point.

Go isn't particularly unique in excelling at easy problems.


Go doesn't excel at easy problems. Go is fine at pretty much everything. Do you think Kubernetes is an easy problem?

The thing is just that Go is very opinionated in its feature set. That's why you see people here writing about complex projects using "wild" or even "god awful" things, and lamenting the inability to properly map API boundaries in the language.

The truth is obviously that none of this is particularly wild. It's just things that the commenter considers inelegant but is perfectly able to follow, which is Go's strength and why it's so good. Want it or not, you will have to write code that someone else can follow.

Don't get me wrong, I'm not going to pretend that Go is in anyway perfect or has the correct feature in as much as that exists. I probably enjoyed writing Ocaml more. But in practice, for a large scale project where collaboration is important, using Go is an awesome experience.


Go is "opinionated" because it's designed to be simple rather than complete.

- Why are lower cased symbols not exported? Because it would be too complicated to add private / public keywords.

- Why isn't there exception handling? It's too complicated. It's simpler to just have everyone manually handle exception flow.

- Why isn't there an optional type? It's too complicated. Just use a nil pointer and or default values.

- Why aren't there sets or other rich datatypes in the stdlib? It's too complicated. Now go and write it yourself or download a microlibrary.

- Why are there no nil pointer protections? It's too complicated.

It's very easy to buy into the Golang PR and say "well it's just opinionated" as opposed to calling it "simplistic" or "incomplete". It's an okay language, I've written a lot of complicated stuff in it over the last 6 or so years, including a distributed KV database. Eventually you WILL hit the limits of "opinionated" design.


> Do you think Kubernetes is an easy problem?

Kubernetes is an easy problem made hard by doing a bunch of things that don't need to be done. I've used small bash scripts to deploy software for most of my freelance career, and the few times I've been forced to use a containerization tool, it has been far more difficult, for no discernible benefit.

> The thing is just that Go is very opiniated in its feature set. That's you see people here writing about complex projects using "wild" or even "god awful" things, and lament the inability to properly map API boundaries in the language.

The problem isn't that Go is opinionated--I often wish Python was more opinionated. The problem is that Go started off with the wrong opinion on generics, and took two iterations (first casts, then `go generate`) to arrive at generics, resulting in a system that isn't opinionated on this issue, because all three ways work for reverse compatibility. And this is a) a very important issue to be opinionated on, and b) extremely forseeable given the languages that came before.

> The truth is obviously that all of these is not particularly wild. It's just things that the commenter considers inelegant but is perfectly able to follow which is Go strength and why it's so code. Want it or not, you will have to write code that someone else can follow.

The lack of abstractions means it's easy to follow on the line by line level, but that falls apart as context grows. Lines in more powerful languages are harder to follow because they do more. If you want to do the same amount of work in Go, you have to write more lines of code. You're going to be implementing the same abstractions ultimately, but because you're writing it custom you're going to do it a little differently every time. As a result, any few lines of code in Go is easy to understand, but the big picture is much harder to understand, because you're caught up in minutia which is slightly different every time instead of using a standard abstraction.

EDIT: There's no way the first person who downvoted this had time to read it.


You can write abstractions in Go, and it has generics. It's just that the abstractions aren't as good, so you end up with harder-to-read code.


You're holding the phone wrong. Kubernetes does the same. So does Docker, and just about every Go project. You don't understand Go.


I've been coding full time in it on a large team for four years now. If Go is this difficult to comprehend, then maybe it's not the simple language it claims to be.


That is the joke. Go conflates simple with primitive, so you end up building things as if computer science had just emerged and it were the '50s, with some syntax sugar and, to be fair, decent concurrency.


> It will work perfectly fine and end up being very readable for you and for the other persons reading your code.

And it makes all future work both 3 times simpler and 3 times as long. I suppose there are situations in which this is great, but I'm partial to requiring more.

I'll admit that hasn't worked out very well for me unless I'm working by myself, though.


I'll take verbose and simple code over terse trickery every day.

People who disagree have never had to wake up at 3 in the morning to fix a critical production issue in someone else's code. And that someone else really loved "elegant and terse" code.

It's not fun to wrap your brain around weird language trickery when you're half-awake and in a hurry to fix stuff before the customers wake up.


I personally think that writing so-called "terse, clever" (a misnomer) code is not an issue with the language, but with the user. Do we really want worse tools just because some people write bad code? Clearly it's an issue with the software engineering process rather than the language itself. A good language should allow a skilled user to write code as clear as day, while properly modelling the problem domain and making incorrect states logically unrepresentable. We have a tool for that: the type system and a compiler.


> Do we really want to have worse tools, just because some people are writing bad code?

People tend to write bad code. It's a fact of life. Tools forcing people who write bad code to write better code can't be worse tools by definition. They are better tools.


The fundamental issue is that humans, unlike machines, never know for sure whether the code they write is in fact correct. One can think they are writing good and readable code, but that doesn't mean anything if the code is incorrect. And if you write lots of boilerplate, that means more possible bugs. That's also why no one sane writes assembly (or, increasingly these days, C) unless they have to. We generally prefer more complex languages which put a constraint on the number of possible bugs.


> is not an issue with the language, rather the user

Go actively prevents you from writing stupid code, it doesn't give you the tools to do cool code golf tricks


You don’t write terse tricky code. That’s just silly. But you can write beautiful Typescript constructs that are completely impenetrable but will disallow all wrongdoing.


"Simple" is a cop-out word. Things can be simple along a lot of vectors. The vector you've chosen seems to be "does less for you" which taken ad absurdum would have you using assembly. Go does have elegant abstractions, and they aren't the simplest along this vector, nor would anyone want them to be. Coroutines, for example, are actually quite conceptually complicated in some ways.

I prefer "understandable"--it appears this is what you're trying to get when you say "simple", but I think you're drastically overselling the understandability of go code. Sure, you understand any given line easily (what you described as "readable"), but you're not usually trying to understand one line of code. Since go's sparse feature set provides few effective tools for chunking[1] your mental model of the program, complex functionality ends up being in too large of chunks to be understood easily. This problem gets worse as programs grow in size.

Another poster mentioned that they start running into problems and wishing they had explicit types with Python programs over 10K LOC, which approximately matches my experience. But comparing to Go, you've got to realize that 10K LOC of Python does a whole lot more than 10K LOC of Go; you'd have to write a lot more Go to achieve the same functionality because of all the boilerplate. That's not necessarily a downside, because that boilerplate is giving you benefits, and I don't think entering your code into the computer is the limiting factor in development speed. But it does mean that a fair comparison of equally-complex programs is going to be a lot more lines of Go than Python, i.e. a fair comparison might be 10K LOC of Python vs 50K LOC of Go. I say "might be" because I don't know what the numbers would be exactly.

How many people have written or worked on projects in Go of that complexity? How many people have written or worked on programs of equivalent complexity in other languages to compare? I'm seeing people discuss how easy it is to start a project in Go, but nobody is talking about how easy it is to maintain 50K LOC of Go.

I've worked on projects of >200K LOC in Python, and the possibly-equivalent >500K LOC in C#. I think the C# was easier to work with, but that's largely because the 200K lines of Python made heavy use of monkey patching, and I've worked in smaller C# codebases that made heavy use of dependency injection to similar detriment. I'm honestly not sure which feels more maintainable to me, given a certain level of discipline to not use certain misfeatures.

I haven't written as much Go, and I wouldn't, because the features of C# which make it viable for projects of this complexity simply aren't present, and unlike Python, Go doesn't provide good alternatives. I suspect the reason we don't have many people talking about this is that not many projects have grown to this complexity, and when they do these problems will become apparent.

The real weak point is Go's type system--it's genuinely terrible, because the features that came standard in other modern statically-typed languages decades before Go was invented were bolted onto Go after the fact. Gophers initially claimed they didn't need generics for a few years. As a result you've got conflicting systems developed before `go generate` (using casts), after `go generate` but before generics (using go generate), and after generics (using generics). It's telling that you seemingly reject generics ("clever library using elegant abstraction and generics") even though go has them now.

Attacking Haskell is sort of a straw man--so far I haven't seen anyone in this thread propose Haskell as a go alternative. I think we agree Haskell is far too dogmatic about its abstractions when it's impractical to be used as a general-purpose language (because I don't think it's intended as a general-purpose language).

[1] https://en.wikipedia.org/wiki/Chunking_(psychology)


Fwiw, I have worked on multiple projects of many hundreds of thousands of lines of code in multiple languages, several of which we rewrote from Python, Perl, PHP, and Ruby to Go, where I first maintained the existing code and then worked on the rewrite. I've also walked into existing large Go projects, and worked in Elixir and some limited JS.

In each and every case except one (some contractors did something really odd in trying to write go like java or ruby, can't recall, but the code was terribad), the go version was both more performant and easier to maintain. This is measured by counting bugs, development velocity, and mean time to remediation.


Meaningful comparisons between programming languages are difficult.

I've done rewrites of Python programs in Python, and the rewrites were more performant and easier to maintain.

My point is, is it the language? Or is it the fact that when you rewrite something, you understand which parts of the program are difficult, you know the gotchas, and you eliminate all the misfeatures you thought you needed the first time but didn't. In short, I suspect the benefit of learning from your mistakes is probably far more valuable than switching languages in either direction.


Hands down, the language made the projects easier to maintain. I have also rewritten from php to python, python to python, and perl to perl, many greenfield projects in each, etc.

Why did the language matter? Largely, static typing, concurrency ergonomics, fast compilation, and easy to ship/run single binaries. The fact it also saved 10-20x in server costs was a great bonus.

Better design can absolutely improve a project and make it easier to maintain and more performant. And bad code can be written in any language. I am more and more convinced that dynamically typed code doesn't have a place in medium to large organizations where a codebase no longer fits in one person's head.


> I think we agree Haskell is far too dogmatic about its abstractions when it's impractical to be used as a general-purpose language (because I don't think it's intended as a general-purpose language).

Originally Haskell was designed to be a language providing

> faster communication of new ideas, a stable foundation for real applications development, and a vehicle through which others would be encouraged to use functional languages

https://www.microsoft.com/en-us/research/wp-content/uploads/...

That doesn't necessarily imply general purpose of course, but today pretty much any language suitable for "real applications development" would be considered as "general purpose", I think. In any case, regardless of what Haskell was originally intended to be, I would say it is a general purpose language (and in fact the best general purpose language).


> Attacking Haskell is sort of a straw man--so far I haven't seen anyone in this thread propose Haskell as a go alternative. I think we agree Haskell is far too dogmatic about its abstractions when it's impractical to be used as a general-purpose language (because I don't think it's intended as a general-purpose language).

I'll be that guy. We like our stuff in Haskell. Watching the rest of the industry move forward is like a reverse trip around the Monopoly board of computer science progress.

When I joined up, everything was Java, which couldn't make a binary. Then the crowd jumped to JS, where we ditched integers and true parallelism. Python freed us from speed. Go came along, promising to remove generics and exceptions, and to finally give us back our boilerplate.

And whenever features progress in the forward direction again, there are two issues - firstly, they sometimes come out kind of crap. Secondly, arguments for or against this crapness tend to take up all the oxygen that could have facilitated discussions around goodness.

Exceptions or return values? Nope, monadic error handling, any day of the week.

Terse dynamic code, or bloated static code? Nope, terse code with full type inference.

Terse nulls or nulls & boilerplate Optionals? Nope, just terse Optionals.

First-order generics or no? Higher-kinded parametric polymorphism.

Multiprogramming via locking & shared memory or message passing? Hey how about I choose between shared-memory transactions or transactional message-passing instead?

There is little stuff happening outside of Haskell to be envious of. Java took a swing at the null problem with Optionals a decade ago. My IDE warns me not to use them. It's taking another swing with "Null-Restricted Value Class Types". I know your eyes glaze over when people rant about Haskell, but for two seconds, just picture yourself happily doing your day-to-day coding without the existence of nulls, and pretend you read a blog post about exciting new methods for detecting them.


The issue is not language semantic. The issue is readability. Having the best feature set in the world is useless if the code produced by others is a pain to decipher.

Haskell disqualified itself for general programming when its community decided that point-free was desirable despite the style being impossible to read and custom operators were a good thing. I personally hate every Haskell code base I have ever seen despite being relatively fluent in the language (an issue Ocaml never had amusingly mostly because its community used to be very pragmatic).


So when you said

> I think we agree Haskell is far too dogmatic about its abstractions when it's impractical to be used as a general-purpose language

did you mean "sometimes some Haskellers use point-free style and it's impossible to read"? If not, could you explain what you did mean?


The person you are responding to didn't say that, I did.

The abstractions I'm pointing at are cases where mutation or side effects are the desired result of execution. Ultimately this always runs up against having to grok a lot of different monads and that's simply never going to be as easy to understand as calling "print" or "break". Haskell works really well if the problems you're solving don't have a ton of weird edge cases, but often reality doesn't work like that.

The other thing is laziness which makes it hard to reason about performance. Note that I didn't say it's hard to reason about execution order--I think they did a good job of handling that.

Don't get me wrong, Haskell's dogmatic commitment to functional purity has led to the discovery of some powerful abstractions that have trickled out into other languages. That's extremely valuable work.


> The person you are responding to didn't say that, I did.

Ah, thanks, I got confused.

> Haskell works really well if the problems you're solving don't have a ton of weird edge cases, but often reality doesn't work like that.

In my experience it's completely the opposite, actually. I can only really write code that correctly handles a ton of weird edge cases in Haskell. It seems that many people think that Haskell is supposedly a language for "making easy code elegant". The benefit of Haskell is not elegance or style (although it can be elegant). The benefit is that it makes gnarly problems tractable! My experience trying to handle a ton of weird edge cases in Python is that it's really difficult, firstly because you can't model many edge cases properly at all because it doesn't have sum types and secondly because it doesn't have type checking. (As I understand it they have added both of these features since I last used Python, but I suspect they're not as ergonomic as in Haskell.)

> this always runs up against having to grok a lot of different monads and that's simply never going to be as easy to understand as calling "print" or "break"

Actually, I would say not really. The largest number of monads you "have to" learn is one, that is, the monad of the effect system you choose. Naturally, not every Haskell codebase uses an effect system, and those codebases can therefore be more complex in that regard, but that's not a problem with Haskell per se, it's an emergent property of how people use Haskell, and therefore doesn't say anything at all about whether Haskell is usable as a general purpose language. For example, consider the following Python code.

    def main():
      for i in range(1, 101):
          if i > 4:
              break
    
          print(i)
You can write it in Bluefin[1], my Haskell effect system as follows.

    main = runEff $ \ioe ->
      withJump $ \break -> do
        for_ [1..100] $ \i -> do
          when (i > 4) $ do
            jumpTo break
    
          effIO ioe (print i)
Granted, that is noisier than the Python, despite being a direct translation. However, the noise is a roughly O(1) cost so in larger code samples it would be less noticeable. The benefit of Haskell here over Python is

1. You don't get weird semantics around mutating the loop variable, and it remaining in scope after loop exit

2. You can "break" through any number of nested loops, not just to the nearest enclosing loop (which is actually more useful when dealing with weird edge cases, not less)

3. You can see exactly what effects are possible in any part of the program (which again is actually more useful when dealing with weird edge cases, not less)

Regarding laziness and performance, that is a resolved issue. I have an article that explains that: http://h2.jaguarpaw.co.uk/posts/make-invalid-laziness-unrepr...

I'm curious what you think of Haskell suitability for general purpose programming in light of my response.

[1] https://hackage.haskell.org/package/bluefin-0.0.6.1/docs/Blu...


> Granted, that is noisier than the Python, despite being a direct translation.

My complaint isn't the noise. My complaint is: can you explain what withJump does? Like, not the intention of it, but what it actually does? This is a rhetorical question--I know what it does--but if you work through the exercise of explaining it as if to a beginner, I think you'll quickly see that this isn't trivial.

> 1. You don't get weird semantics around mutating the loop variable, and it remaining in scope after loop exit

Is this an upside? It's certainly unintuitive, but I can't think of a case this has ever caused a problem for me in real code.

> 2. You can "break" through any number of nested loops, not just to the nearest enclosing loop (which is actually more useful when dealing with weird edge cases, not less)

Again, is this actually a problem? Any high school kid learning Python can figure out how to set a flag to exit a loop. It's not elegant or pretty, but does it actually cause any complexity? Is it actually hard to understand?

And lots of languages now have labeled breaks.

Arguably the Lua solution (gotos) is the cleaner solution here, but that's not popular. :)
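For reference, the flag approach I'm defending looks something like this in Python (`find_pair` is a hypothetical example, just for illustration):

```python
def find_pair(matrix, target):
    """Return the indices of the first cell equal to target,
    exiting two nested loops with a flag."""
    found = None
    done = False
    for i, row in enumerate(matrix):
        for j, value in enumerate(row):
            if value == target:
                found = (i, j)
                done = True
                break  # only exits the inner loop
        if done:
            break  # second check needed to leave the outer loop
    return found

print(find_pair([[1, 2], [3, 4]], 3))  # (1, 0)
```

It's a few extra lines, but I'd argue any reader can follow it.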

> 3. You can see exactly what effects are possible in any part of the program (which again is actually more useful when dealing with weird edge cases, not less)

What does this even mean? In concrete terms, why do you think I can't see what effects are possible in Python, and what problems does that cause?

In all three of the cases that you mention, I can see a sort of aesthetic beauty to the Haskell solution, which I appreciate. But my clients don't look at my code, they look at the results of running my code.

> Regarding laziness and performance, that is a resolved issue. I have an article that explains that: http://h2.jaguarpaw.co.uk/posts/make-invalid-laziness-unrepr...

The fact that you need a blog post to tell people how to resolve an issue exemplifies my point that this is not resolved. Nobody needs to be told how to turn off laziness in Python, because it's not turned on.

The fact is, Haskell does the wrong thing by default here, and even if you write your code to evaluate eagerly, you're going to end up interfacing with libraries where someone didn't do that. Laziness still gets advertised up front as one of the awesome things about Haskell, and while experienced Haskell developers are usually disillusioned with laziness, many Haskell developers well into the intermediate level still write lazy code because they were told early on that it's great, and haven't yet experienced enough pain with it to see the problems.


Also, you need a 3rd party library to handle something this basic...


Haskell has a long history of a small base library with a lot of essential functionality being provided as third party libraries, including mtl and transformers (monad transformers), vector (an array library). Even time (a time library) and text (Unicode strings) are third party, by some definitions (they aren't part of the base library but they are shipped with the compiler).

Some people think that's fine, some people think it's annoying. I personally think it's great because it allows for a great deal of separate evolution.


Thanks for your detailed reply! As a reminder, my whole purpose in this thread is to try to understand your comment that Haskell is

> impractical to be used as a general-purpose language (because I don't think it's intended as a general-purpose language)

From my point of view Haskell is a general purpose language, and an excellent one (and in fact, the best one!). I'm not actually sure whether you're saying that

1. Haskell is a general purpose language, but it's impractical

2. Haskell is a general purpose language, but it's too impractical to be used as one (for some (large?) subset of programmers)

3. Haskell is not a general purpose language because it's too impractical

(I agree with 1, with the caveat I don't think it's significantly less practical than other general purpose languages, including Python. It's just impractical in different ways!)

That out of the way, I'll address your points.

> can you explain what withJump does? Like, not the intention of it, but what it actually does? This is a rhetorical question--I know what it does--but if you work through the exercise of explaining it as if to a beginner, I think you'll quickly see that this is isn't trivial.

Yes, I can explain what it does! `jumpTo break` throws an exception which returns execution to `withJump`, and the program continues from there. Do you think explaining this to a beginner is more difficult than explaining that `break` exits the loop and the program continues from there?

> It's certainly unintuitive, but I can't think of a case this has ever caused a problem for me in real code.

I can! And not just me. It's caused a lot of problems for a lot of users over the years: https://stackoverflow.com/questions/54288926/python-loops-an...
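To make the gotcha concrete (a minimal sketch):

```python
# The loop variable survives the loop...
for i in range(3):
    pass
print(i)  # i is still in scope after the loop: prints 2

# ...and closures capture the variable itself, not its value at
# each iteration, so all three lambdas see the final i.
funcs = [lambda: i for i in range(3)]
print([f() for f in funcs])  # prints [2, 2, 2], not [0, 1, 2]
```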

> Again, is this actually a problem? Any high school kid learning Python can figure out how to set a flag to exit a loop. It's not elegant or pretty, but does it actually cause any complexity? Is it actually hard to understand?

Yes, I would say that setting flags to exit loops causes additional complexity and difficulty in understanding.

> And lots of languages now have labeled breaks.

I'm finding this hard to reconcile with your comment above. Why do they have labelled breaks if it's good enough to set flags to exit loops?

> Arguably the Lua solution (gotos) is the cleaner solution here, but that's not popular. :)

Sure, if you like, but remember that my purpose is not to argue that Haskell is the best general purpose language (even though I think it is) only that it is a general purpose language. It has at least the general purpose features of other general purpose languages. That seems good enough for me.

> What does this even mean? In concrete terms, why do you think I can't see what effects are possible in Python, and what problems does that cause?

    def foo(x):
        bar(x + 1)
Does foo print anything to the terminal, wipe the database or launch the missiles? I don't know. I can't see what possible effects bar has.

    foo1 :: e :> es => IOE e -> Int -> Eff es ()
    foo1 ioe x = do
        bar1 ioe x

    foo2 :: e :> es => IOE e -> Int -> Eff es ()
    foo2 ioe x = do
        bar2 (x + 1)
I know that foo2 does not print anything to the terminal, wipe the database or launch the missiles! It doesn't give bar access to any effect handles, so it can't. foo1 might though! It does pass an I/O effect handle to bar1, so in principle it might do anything!

But again, although I think this makes Haskell a better language, that's just my personal opinion. I don't expect anyone else to agree, necessarily. But if someone else says Haskell is not general purpose I would like them to explain how it can not be, even though it has all these useful features.

> In all three of the cases that you mention, I can see a sort of aesthetic beauty to the Haskell solution, which I appreciate. But my clients don't look at my code, they look at the results of running my code.

Me too, and the results they see are better than if I wrote code in another language, because Haskell is the language that allows me to most clearly see what results will be produced by my code.

> The fact that you need a blog post to tell people how to resolve an issue exemplifies my point that this is not resolved. Nobody needs to be told how to turn off laziness in Python, because it's not turned on.

Hmm, do you use that line of reasoning for everything? For example, if there were a blogpost about namedtuple in Python[1] would you say "the fact that you need a blog post to tell people how to use namedtuple exemplifies that it is not a solved problem"? I really can't understand why explaining how to do something exemplifies that that thing is not solved. To my mind it's the exact opposite!

Indeed in Python laziness is not turned on, so instead if you want to be lazy you need blog posts to tell people how to turn it on! For example [2].
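A minimal sketch of what opting in to laziness looks like in Python, using generators and itertools:

```python
import itertools

def naturals():
    """An infinite lazy stream: 1, 2, 3, ..."""
    n = 1
    while True:
        yield n
        n += 1

# Nothing is computed until islice pulls values through the pipeline.
evens = (n for n in naturals() if n % 2 == 0)
print(list(itertools.islice(evens, 5)))  # [2, 4, 6, 8, 10]
```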

> The fact is, Haskell does the wrong thing by default here

I agree. My personal take is that data should be by default strict and functions should be by default lazy. I think that would have the best ergonomic properties. But there is no such language. Does that mean that every language is not general purpose?

> even if you write your code to evaluate eagerly, you're going to end up interfacing with libraries where someone didn't do that.

Ah, but that's the beauty of the solution. It doesn't matter whether others wrote "lazy code". If you define your data types correctly then your data types are free of space leaks. It doesn't matter what anyone else writes. Of course, other libraries may use laziness internally in a bad way. I've fixed my fair share of such issues, such as [3]. But other libraries can always be written in a bad way. In Python a badly written library may cause an exception and bring down your worker thread when you weren't expecting it, for example. That's a weakness of Python, but it doesn't mean it's not a general purpose language!

> Laziness still gets advertised up front as being one of the awesome things about Haskell

Hmm, maybe. "Pure and functional" is the main thing that people emphasize as awesome. You yourself earlier brought up SPJ saying that the next Haskell will be strict, so we both know that Haskellers know that laziness is a double-edged sword. I'm trying to point out that even though one edge of the sword of laziness points back at you, it's not actually too hard to manage, and having to manage it doesn't make Haskell not a general purpose language.

> and while experienced Haskell developers are usually disillusioned with laziness, many Haskell developers well into the intermediate level still write lazy code because they were told early on that it's great, and haven't yet experienced enough pain with it to see the problems.

Hmm, maybe. I don't think people deliberately write lazy (or strict) code. They just write code. The code will typically happen to have a lot of laziness, because Haskell is lazy by default. I think that we agree that that laziness is not the best default, but we disagree about how difficult it is to work around that issue.

I would be interested to hear whether you have more specific ideas you can share about why Haskell is not a general purpose language, in light of my responses.

[1] https://blog.teclado.com/pythons-namedtuples/

[2] https://www.tothenew.com/blog/exploring-pythons-itertools-mo...

[3] https://github.com/mrkkrp/megaparsec/issues/486


> everything was Java, which couldn't make a binary. Then the crowd jumped to JS, where we ditched integers and true parallelism. Python freed us from speed. Go came along, promising to remove generics and exceptions, and to finally give us back our boilerplate.

That paragraph made me chuckle, thanks.

> picture yourself happily doing your day-to-day coding without the existence of nulls

I've seen it, with Elm and Rust, and now I hate go's "zero values" too because it makes everything a bit more like PHP aka failing forward.


> Exceptions or return values? Nope, monadic error handling, any day of the week.

Ehhhh...

The thing is, there are a lot of cases where I can look at the code and I know the error won't happen because I'm not calling it that way. Sure, sometimes I get it wrong, but the fact is that not every application needs a level of reliability that's worth the effort of having to reason around error handling semi-explicitly to persuade the compiler that the error is handled, when it really doesn't need to be.

> Terse dynamic code, or bloated static code? Nope, terse code with full type inference.

I think you're significantly overselling this. Type inference is great, but you can't pretend that you don't have to implicitly work around types sometimes, resulting in some structures that would be terser in a dynamic language. Type inference is extremely valuable and I really don't want to use static types without it, but there are some tradeoffs between dynamic types and static types with type inference that you're not acknowledging. I think for a lot of problems Haskell wins here, but a lot of problems it doesn't.

One area I'm exploring with the interpreter I'm writing is strong, dynamic typing. The hypothesis is that the strictness of the types matters more than when they are checked (compile time or runtime). Python and Ruby I think both had this idea, but didn't take it far enough in my opinion, making compromises where they didn't need to.
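Python already illustrates the "strong" half of that, in that mismatched types fail loudly at runtime instead of being silently coerced (a tiny sketch):

```python
# Strong typing: no silent coercion between str and int.
try:
    "1" + 1
except TypeError as e:
    print("caught:", e)

# But the checking is dynamic: this is only an error once called.
def later():
    return "1" + 1
```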

> Terse nulls or nulls & boilerplate Optionals? Nope, just terse Optionals.

100% with you on this.

> First-order generics or no? Higher-kinded parametric polymorphism.

Ehhh, I feel like this is getting overly excited about something that simply isn't all that useful. I'm sure that there's some problem out there where higher-kinded types matter, or maybe I just lack vision, but I'm just not coming across any problems in my career where this feels like the solution.

I feel like there's a caveat I want to add to this but I'm not able to put my finger on it at the moment, so bear with me if I revise this statement a bit later. :)

> Multiprogramming via locking & shared memory or message passing? Hey how about I choose between shared-memory transactions or transactional message-passing instead?

Ehh, the languages I like are all using transactional message-passing anyway, and I'm pretty sure Haskell didn't invent this.

> There is little stuff happening outside of Haskell to be envious of. Java took a swing at the null problem with Optionals a decade ago. My IDE warns me not to use them. It's taking another swing with "Null-Restricted Value Class Types". I know your eyes glaze over when people rant about Haskell, but for two seconds, just picture yourself happily doing your day-to-day coding without the existence of nulls, and pretend you read a blog post about exciting new methods for detecting them.

I mean sure, I'm 100% with you on Option types, as I said. But, imagine being able to insert `print(a)` into your program to see what's in the `a` variable at a specific time. Hey, I know that's not pure, but it's still damn useful.


> imagine being able to insert `print(a)` into your program to see what's in the `a` variable at a specific time. Hey, I know that's not pure, but it's still damn useful.

In Haskell that’s Debug.Trace.traceShow. You can use it in pure code too.


Spot on! Same feeling here!


> Go is basically anti-Haskell. It forces you to be less clever and that's great.

This is a bit of tangent, but I think it's worth pointing out the value in Haskell (at least as far as I see it) is not that it allows you to write "clever" code, but that it allows you to define precise interfaces. I suspect some people like Haskell because they can be "clever", and I suspect to be able to define precise interfaces you have to allow some degree of cleverness (because you need things like higher order functions and higher kinded types), but cleverness is not, in itself, the value of Haskell.


And yet they had to reach out to Haskell folks to fix their generics story.


For loops? Dang, that's some clever syntax you have there. Personally I prefer a bog standard while loop. /s


That gives me the impression you've never written in go. Certainly not a while loop.


Nah, bog standard goto


I would rather go had real enums, and I would _prefer_ if there were sum types.

I agree it's more verbose, but I don't find that that verbosity really bothers me most of the time. Is

    res = [x for x in foo if "banned" in x]
really actually more readable than

    var result []string
    for _, x := range foo {
        if strings.Contains(x, "banned") {
            result = append(result, x)
        }
    }
? I know it's 6 lines vs 1, but in practice I look at that and it's just as readable.

I think go's attitude here (for the most part) of "there's likely only one dumb, obvious way to do this" is a benefit, and it makes me think more about the higher level rather than what's the most go-esque way to do this.


I agree that list comprehensions aren't any easier to read. A proper streaming interface on the other hand lets you easily follow how the data is transformed:

    foo
      .stream()
      .filter(x -> x.contains("banned"))
      .collect(Collectors.toList());
As an aside, Go conflating lists and views irks me, partly because of the weird semantics it gives append (e.g. two seemingly disjoint slices can share a backing array, so appending an element to one may modify the other).


The problem with this is that people again get way too clever with it. It's not just stream -> filter -> collect; there will be a bunch of groupBys in there, etc. If you have to debug or extend the functionality, it's a nightmare to understand what all the intermediate representations are.


Inspecting intermediate representations is trivial by just collecting them into a variable?

More complicated scenarios are exactly what streaming APIs excel at, by treating each step as a single transformation of data. Lack of a proper group by function is one of my classic examples for how Go forces you into an imperative style that's harder to understand at a glance.
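For contrast, here is a group-by expressed as a single data transformation in Python (a sketch; note that itertools.groupby groups consecutive runs, so the input is sorted first):

```python
from itertools import groupby

words = ["apple", "avocado", "banana", "cherry", "clementine"]

# Group words by first letter. Sorting first makes the grouping
# total rather than run-based.
grouped = {
    key: list(items)
    for key, items in groupby(sorted(words), key=lambda w: w[0])
}
print(grouped)
# {'a': ['apple', 'avocado'], 'b': ['banana'], 'c': ['cherry', 'clementine']}
```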


Elixir gives you tooling to inline inspect


You could write your own syntax sugar functions with signatures like...

    func copyArrStringShallow(x []string) []string { return x }
    // Strings are immutable in Go, so a shallow slice copy suffices;
    // for []byte etc. you would also need to deep-copy each element.
    func copyArrStringDeep(x []string) []string {
      ret := make([]string, len(x)) // length len(x), not capacity, so copy fills it
      copy(ret, x)
      return ret
    }


I argue:

- The list comprehension is ever so slightly more readable. (Small Positive)

- It is a bit faster to write the code for the Python variant. (Small Positive)

So this would be a small positive when using Python.

Furthermore, I believe there is this "small positive" trade-off on nearly every aspect of Python, when compared to Go. It makes me wonder why someone might prefer Go to Python in almost any context.

Some common critiques of Python might be:

- Performance in number crunching

- Performance in concurrency

- Development issues in managing a large code base

I believe the ecosystem is sufficiently developed that Numpy/Numba JIT can address nearly all number crunching performance, Uvicorn w/ workers addresses concurrency in a web serving context, ThreadPool/ProcessPool addresses concurrency elsewhere, and type hints are 90% of what you need for type safety. So where does the perceived legitimacy of these critiques come from? I don't know.
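As a sketch of the ThreadPool route (`work` here is a hypothetical stand-in for an I/O-bound task):

```python
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # placeholder for an I/O-bound call (HTTP request, DB query, ...)
    return n * n

# Threads run the calls concurrently; map preserves input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(work, range(5)))

print(results)  # [0, 1, 4, 9, 16]
```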


> The list comprehension is ever slightly more readable.

I disagree - it's terse to the point of being hard to parse, particularly when you get smart ones like:

    [x for x in t if x not in s]
> It is a bit faster to write the code for the Python variant.

Code should be written to be read. Saving a few keystrokes vs time spent figuring out the `not in in not with` dance gives the edge to Golang here. It's "high context".

> - Performance in number crunching

> - Performance in concurrency

And "performance in all other areas". See the thread last week about massive speedups in function calls in python where it was still 5-10x slower than go.

> So where does the perceived legitimacy of these critiques come from? I don't know.

It's pretty hard to discuss it when you've declared that performance isn't a problem and that type annotations solve the scalability of development problem.


Luckily, we are writing Python, not Go, so we can use variable names with more than one letter:

    [word for word in sentence if word not in bannedWords]
Suddenly, nothing is hard to parse.


I still believe Python comprehensions have a confusing structure, and in real code I've seen it's 10x worse, with 5+ expressions packed into a single line. I much prefer Nim's style of list comprehensions:

    let a = collect:
      for word in wordList:
        if word notin bannedWords:
          word
    let b = collect(for x in list: x)
It's still very terse but, more importantly, it's the same syntax as a regular `for` loop. It keeps the loop's structure, whereas complex Python comprehensions read like a "keyword soup".


I do think people should tend to be more verbose with variable names, in general.

And since it's python, use snake case to add faux white space to help the eyes parse the statement.


This is my (current) favorite list comprehension: https://github.com/huggingface/datasets/blob/871eabc7b23c27d... Someone was feeling awfully clever that day. (Not that I'm not occasionally guilty myself.)


Rust:

    let results = foo
        .into_iter()
        .filter(|s| s.contains("banned"))
        .collect::<Vec<&str>>();
C#:

    var results = foo
        .Where(s => s.Contains("banned"))
        .ToArray();
Convenient, easy to understand and fast.


I think Rusts terseness shows here - I think C#'s approach is the best. Also, if you don't use `.ToArray()`, you still have an IEnumerable which is very usable.


Also true in Rust. You don't have to use `collect` at the end and you still get an iterator.

    let results = foo
            .into_iter()
            .filter(|s| s.contains("banned"))


And I think Kotlin would just be

    val results = foo.filter { "banned" in it }
Though I'm not sure I'm a fan of it eagerly finishing with a List. If you chained several operations you could accidentally be wasting a load of allocations (with the solution being to start with foo.asSequence() instead)


Of course. This was just to illustrate the point; whether to snapshot/collect a sequence or not is another matter entirely. It just goes to show that idiomatic and fast* iterator expressions are something that modern general-purpose PLs ought to have.

* I know little about performance characteristics of Kotlin but assume it is subject to behaviors similar to Java as run by OpenJDK/GraalVM. Perhaps similar caveats as with F#?


Unfortunately Kotlin fails very, very hard on the "iteration speed" side of things. The compilation speed is unbelievably slow, and it suffers very much from the "JVM startup time" problem.

If it were an order of magnitude faster to compile I'd consider it.


IMO the Python version provides more information about the final state of res than the Go version at a glance: It's a list, len(res) <= len(foo), every element is an element of foo and they appear in the same order.

The Go version may contain some typo or other that simply sets result to the empty list.

I'd argue that having idioms like list comprehension allows you to skim code faster, because you can skip over them (ah! we're simply shrinking foo a bit) instead of having to make sure that the loop doesn't do anything but append.

This even goes both ways: do-notation in Haskell can make code harder to skim because you have to consider what monad you're currently in as it reassigns the meaning of "<-" (I say this as a big Haskell fan).

At the same time I've seen too much Go code that does err != nil style checking and misses a break or return statement afterwards :(


”Is <python code> really actually more readable than <go code>?”

I mean, I mostly work in Python, but, yes absolutely.

There’s something to be said for locality of behavior. If I can see 6x as many lines at once that’s worth a lot in my experience.

This becomes blatantly apparent in other languages where we need 10 files open just to understand the code path of some inheritance hierarchy. And it’s not nearly that extreme in go, but the principle is the same.

But there is something to be said for the one way to do it, and not overthinking it.


Filtering a container by a predicate is 50-year-old technology and a very common thing. It's unbelievable that a "modern" language has no shorter or clearer idiom than that convoluted, boilerplate-filled Ministry of Silly Walks blob. Python had filter() and then got list comprehensions from Haskell. PowerShell has Where-Object, taking from C# LINQ's .Where(), which takes from SQL's WHERE. Prolog has include/3, which goes back to sublist(Test, Longer, Shorter), and LISP-Machine LISP had sublist in the 1970s[1]. APL has the single character / for compressing an array from the bitmask result of a Boolean test, in the original APL/360 in 1968, and it was described in the book in 1962[2].
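For comparison, the two classic Python spellings of the idiom side by side (a small sketch, matching the "banned" example upthread):

```python
words = ["apple", "banned-thing", "cherry"]

# the decades-old functional idiom...
kept = list(filter(lambda w: "banned" in w, words))

# ...and the comprehension borrowed from Haskell
kept2 = [w for w in words if "banned" in w]

print(kept)   # ['banned-thing']
print(kept2)  # ['banned-thing']
```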

Brian Kernighan gave a talk on the readability of code and not getting too clever[3], "Elements of Programming Style", where he talks about languages having one way to write one thing so that you can spot mistakes easily. I am aware he's closely associated with Go (he co-wrote The Go Programming Language book) and I will mention him again in a moment. In Python the non-list-comprehension version might be:

    result = []
    for x in foo:
        if "banned" in x:
            result.append(x)
Which is still clearer than the Go, simply by having less going on. I usually argue that "readable" merely means "familiar", and Python is familiar to me while Go isn't. Your Go code makes me wonder:

- "var" in C# does type inference. You declare []string but don't declare types for the pair _, x. What's up with the partial type inference? What is "var" adding to the code over classic "int x"-style variable declarations?

- What is "range" doing? From _ I guess it does enumeration and that's a throwaway for the index (if so it has an unclear name). If you have to enumerate and throw away the index into _ because there isn't another way to iterate then why does keyword "range" need to exist? Conversely if there are other ways to iterate and the keyword "range" is optional, why do it this way with a variable taking up visual space only to be thrown away? (The code equivalent of saying "for some reason" and drawing the audience's attention to ... nothing). And why is range a keyword instead of a function call, e.g. Python's "for (idx, elem) in enumerate(foo):" ?

- Why is there assignment using both := and = ?

- Why strings.Contains() with a module name and capital letter but append() with no module and all lowercase? Is there an unwritten import for "strings"?

- The order of arguments to Contains() and append(); Prof. Kernighan calls this out at 10m10s in that talk. C# has named arguments[4] or the object.Method() style haystack.Contains(needle) is clear, but the Go code has neither. It would be Bad Prolog(tm) to make a predicate Contains(String1, StringA) because it's not clear which way round it works, but "string_a in string_1" is clear in Python because it reads like English. AFAIK a compiler/type system can't help here as both arguments are strings, so it's more important that the style helps a reader notice if the arguments are accidentally the wrong way around, and this doesn't. We could ask the same about the _, x as well.

- "result =" looks like it's overwriting the variable each time through the loop (which would be a common beginner mistake in other languages). If append is not modifying in place and instead returning a new array, is that a terrible performance hit like it is in C#? Python list comprehensions are explicitly making a completely new list, but if the Go code said "result2 = append(result, x)" is it valid to keep variable "result" from before the append, or invalid, or a subtle bug? The reader has to think of it, and know the answer, the Python code avoids that completely.

- And of course the forever curly brace / indent question - are the closing } really ending the indented block that they look like they are ending judging from the dedent? I hear Go has mandatory formatting which might make that a non-issue, but this specific Python line has no block indentation at all so it's less than a non-issue.

- The Python has five symbols =[""] to mentally parse, pair and deal with, compared to twenty []_,:={.(,""){=(,)}} in the Go.

Step back from those details to ask "what is the code trying to achieve, and is this code achieving the goal?" In the Go, the core test ".Contains()" is hiding in the middle of the six lines. I'm not going to say you need to be able to read a language without learning it, but in the long-form Python what is there even to wonder about? B. Kernighan calls that out at about 12:50 in the talk: "you get the sense the person who is writing the code doesn't really understand the language properly". You say code is meant to be read more than written, and I claim it's more likely that a reader won't understand the details than that they will. Which means code with fewer details, which "just works" the way it looks (Pythonic), is more readable. As BWK said in the talk, "It's not that you can't understand [the Go], it's that you have to work at it, and you shouldn't have to work at it for a task this simple".

[1] https://old.reddit.com/r/ProgrammingLanguages/comments/14tvu...

[2] https://keiapl.org/archive/APL360_UsersMan_Aug1968.pdf 3.38 or PDF page 94, dating back to the original A Programming Language (APL) book in 1962, source https://aplwiki.com/wiki/Replicate#History

[3] https://www.youtube.com/watch?v=8SUkrR7ZfTA

[4] https://learn.microsoft.com/en-us/dotnet/csharp/programming-...


>Brian Kernighan gave a talk on the readability of code ... I am aware he's one of the Go designers

I don't think so.

Not according to Wikipedia at least. First sentence:

>Go is a statically typed, compiled high-level programming language designed at Google[12] by Robert Griesemer, Rob Pike, and Ken Thompson.[4]

https://en.m.wikipedia.org/wiki/Go_(programming_language)

Maybe you confused Brian Kernighan with Ken Thompson.

I think I have done that in the past myself. He (Ken Thompson) also worked a lot on Unix; maybe that is why the confusion.

https://en.m.wikipedia.org/wiki/Unix

Or maybe the confusion is because Kernighan was a co-author of The Go Programming Language book.


Not confusion with Ken Thompson; probably the book.


The newlines were eaten, so that does make the Python version look more readable, but I agree with you.


Thanks - updated!


Go has real enums. All an enum does is count.

You're probably thinking of value constraints? Or, perhaps, exhaustive case analysis? Go certainly lacks those features.

And, indeed, they sound like nice features, but, to be fair, not well supported in any popular programming language. At best we get some half-assery. Which always raises the question: are the popular languages popular because of their lacking type systems?


This topic has been beaten to death, and being pedantic about the definition of an enum to say "actually go has them" isn't helpful. There are dozens of articles from the last decade which explain the problems. Those problems don't exist in plenty of programming languages.

No language is perfect, but go's particular set of bugbears is a good tradeoff


> being pedantic about the definition of an enum to say "actually go has them" isn't helpful.

Incorrect. The term "real enums", when used to imply that enums are something other than the basic element of the same name, encompasses a number of distinct features that are completely independent of each other. In order to meaningfully talk about "real enums", we need to break it down into the individual parts.

If you're just trolling in bad faith, sure, leave it at "real enums" to prevent any discussion from taking place, but the rules of the site make it pretty clear that is not the intent of Hacker News.

> Those problems don't exist in plenty of programming languages.

Plenty, but none popular. Of the popular programming languages, Typescript seems to try the hardest, but even then just barely shows some semblance of supporting those features – still only providing support in some very narrow cases. The problems these features intend to solve are still very much present in general.


Words can have more than one meaning. As far as I know, no one voted you to be arbiter of all terms and their One True Correct™ meaning. It's pretty clear what the previous poster intended to say.


Quite clear, in fact, which is why we are able – in theory – to have a discussion about all the distinct features at play. If it weren't clear, we wouldn't be able to go there.

I say in theory, because as demonstrated by the sibling comment, there apparently isn't much programming expertise around here. At least it was fascinating to see how a mind can go haywire when there isn't a canned response to give.


Dude. Everyone knew what "real enums" meant including you. Please stop.

And yes popular languages do have real type safe enums. C++, Typescript, Rust, god even Python.


> And yes popular languages do have real type safe enums.

Right, as we already established, but which is only incredibly narrow support within the features in question. While you can find safety within the scope of enums and enums alone, it blows up as soon as you want the same safety applied to anything else. No popular language comes close to completing these features, doing it half-assed at most. We went over this several times now. Typescript goes a little further than most popular languages, but even it doesn't go very far, leaving all kinds of gaps where the type system does not provide the aforementioned safety.

You clearly know how to read, by your own admission, so why are you flailing around like one of those wacky blow up men at the used car lot? Are you just not familiar with programming?


> Are you just not familiar with programming?

I am very familiar with programming. The only things you've said so far have been attempts to redefine well-understood terms and now ad hominem and incoherent rambling.

If you don't have anything useful to say...


There is no redefinition. You know that because you literally repeated what I said in your comment, so clearly you are accepting of what was said.

Seemingly the almost verbatim repetition, yet presented in a combative form is because you don't have familiarity with programming, and thus can only respond with canned responses that you picked up elsewhere. If you do have familiarity, let's apply it.

So, do you actually want to discuss the topic at hand, or are you here just for a silly fake internet argument?


I agree with this take. I find Rust a more exciting language from a personal project perspective—and it's what I go with even when it doesn't make 100% sense.

Go is fine, though, and works well in a team environment. It's just clunky, but clunky in a productive way.


Rust-script + cmd_lib = scripting using shell, and converting it to Rust when I want performance.

I know it might sound dreadful but it works well for me, especially without an internet connection.


Damn, everything new really is old again. Everyone said the same thing about Java. Yet it still works and gets the job done. Go does as well. I'd rather poke my eyes out with a nail than use Python.


> I'd rather poke my eyes out with a nail than use Python.

Glad I'm not the only one. Every time I'm forced to use Python I cringe. What version disaster am I going to run into today? The 2->3 transition happened a long time ago at this point and I still run into lingering effects. Also, for short 'script' like things Python doesn't feel any quicker or easier to write. Go is my Goto (hehe) for short scripts.

And while I'm ranting, I'll also say that modern/latest version Java is also really nice.


> The 2->3 transition happened a long time ago at this point and I still run into lingering effects.

Interesting. I read about this a lot and I feel like it's a gimmick at this point. What lingering effects of the Py2->Py3 transition do you run into?


Do you write Go for fun as hobby, or do you agree with the OP?


I don't program as a hobby or for fun anymore. It's my job and I do it, then I go home and do non-programming there. I use Go and Java and occasionally Python if I have to. I do agree with the OP though.


I feel like autocomplete has reduced the problem of verbosity for all languages. And if I am writing something that is going to have to be supported a lot I want something that is very explicit and easy to read. For me that is the "verbose" languages.


I feel the opposite way about this, and find this kind of verbosity reduces the signal to noise of the intention behind the code


This is exactly what I was going to say... ever since I started using Copilot, verbosity bothers me a lot less. It isn't painful when I don't have to type/copy/etc most of it.


For personal stuff I use Groovy.

It has the JVM and JDK and all the associated jar/library ecosystem.

It has tons of really good stuff from Python/Ruby.

Typing is optional, so you can get full static typing, but you can go dynamic when you need it.

That said, it's basically a deadend language career wise.


> lack of proper enums, lack of sum types, no null safety etc

I miss the same for work.

I wonder how long it will take for Go to be the new PHP. Zealotry is through the roof. I read here people say "lack of proper enums, lack of sum types, no null safety" is a good thing. I read here people say "you can do anything with Go". It's simply not true, and very uninformed.

I wish everyone a good experience with their lang of choice.


I'm in the same boat. I believe it's mostly a matter of taste. We prefer expressiveness and power, others prefer simplicity of semantics and verbose code.


So you're ok with Python, which doesn't have sum types or null safety, 2 out of 3 things you say are missing in Go.

In fact, Python didn't have any compile-time types until recently and, compared to Go, is slow and bloated.

So maybe the issue is not Go's lacking some features but something entirely different.


    class A:
        def __init__(self):
            pass

        def f(self):
            pass

    def g(a: A | None):
        a.f()  # type checker flags this: a might be None

(edit: I don't know how to format code on HN)

If I activate type-checking in VS Code this will highlight an error, although the python interpreter will indeed try to run it without compile time error

As I said, for my side projects this is enough for me to model my problems properly without having to resort to multiple hacks

And I took Python as an example, I also enjoy using Ocaml and Rust


> I don't know how to format code on HN

two spaces: https://news.ycombinator.com/formatdoc


Python is dynamically typed, so everything is a sum type.


Mypy has supported those for like 3-4 years already


That's why Typescript is my hammer, unless I need a jack hammer, which is Rust.

With Deno and Typescript I get an even more versatile toolbox than with Go. And what's even more important to me, Typescript is safer and more ergonomic than Go, but slightly slower. Rust is safer, more ergonomic and faster than Go, but much harder to learn.

Typescript, especially in strict mode (which is a must), is underrated. Compared to Go, we get:

- null safety

- widely supported generics

- discriminated unions with either manually (some lines of code) or es-lint exhaustiveness checks

- much safer concurrency, as all Typescript code runs single-threaded. You need web workers, which are not as safe as Rust for concurrency, but much safer than Go.

- collection / iterator methods

So far, I see only a few downsides. I'd be happy if people could provide more. Currently I'm scratching my head over why I didn't fall in love with Typescript (for backend and CLI) earlier. Maybe I haven't seen the dark sides yet. So, some points where Go is stronger than Typescript:

- Go is much more efficient in terms of size and memory usage

- Go's GC is better than V8's

- Go is faster on CPU bound tasks

- Go has a greater std lib, however, Deno's std lib is pretty nice as well.

- The ecosystem is smaller, but doesn't have as much trash as NPM. The dependency trees with NPM are usually large. By the way, Rust has this problem as well, less than Typescript but more than Go. Still, many mainstream NPM packages are safe and rock solid.

What else could we say against Typescript (and preferably Deno or Bun)? I'm really eager to hear from people who have ditched it and why.


- Typescript doesn't protect you at all against external data.

- A Go program runs single-threaded, unless you explicitly tell it not to, so I don't quite see that as a plus for Typescript.

- Go is quite a bit faster than Typescript.

- I haven't had a versioning problem in Go, and I can still compile and run old code. Typescript still beats Python in that respect, but Go wins on stability.

And I say this as someone who really likes Typescript. It's a blessing for frontend browser work.


> - A Go program runs single-threaded, unless you explicitly tell it not too, so I don't quite see that as a plus for Typescript.

I don't think this is a fair comparison. Almost all software needs some kind of concurrency, and multi-threading is the only way to do concurrency in Go. For example, the http server in Go's stdlib runs each request in its own goroutine, which I guess you technically asked for, but also can't be avoided.


That means you think it's a good idea that a JS/TS server handles everything on a single thread, which doesn't sound like an advantage.

BTW, you can also start multiple processes in go instead of go routines. There's probably a Worker-like library for it.


If you want TS-supported deserialization, just use runtime schema validation. I don't see the issue. It's dead-simple and is the way everyone does it.


Sure that's good advice, but it was a comparison of language features.


Amen. I was unreasonably opposed to TypeScript at first after lots of bad experiences with messy JavaScript when I was first learning development - but after immersing myself in it for a year or so at $NEW_JOB, I absolutely love it. The only way in which I can see that Go outperforms TypeScript is performance, but frankly I don't care and most developers shouldn't - unless you're writing truly low-level high-performance utilities, you should (within reason) bias for being able to write (and iterate) fast over being able to execute fast. You can always re-write in Rust if you find that performance is a bottleneck.


I have been using Typescript as my primary language for years, and since I am used to it I actually enjoy it (much more than I enjoyed Java or Python in the past). Depending on what you build I agree, especially for one-off scripts or if you plan on hosting just one or two things somewhere.

On the other hand, I recently saw a video by webdev Cody that got me thinking, where he compared a dead-simple API server in Go and in Bun hosted on Railway: their memory usage differed by an order of magnitude in favour of Go, plus you can compile your program down to a few megabytes instead of bundling a ~100-megabyte runtime. So if you have a couple dozen side projects where you host servers/APIs, that difference can end up in a noticeable operating cost differential. He made a few other points, such as throughput, the more complete standard library, tooling such as go fmt, and that writing the equivalent server in Go wasn't really all that different from the Typescript code.

You're right that rewriting is always an option, but for all-over-the-place folks like me the next 3 projects are half done before I even think about optimising my hosting bill. But as I said, it strongly depends on what you build.

I guess there are no winners, just tradeoffs.

I wonder if one use case for LLMs in the future could be feeding a sizeable Typescript/Python codebase into one and having it spit out an equivalent in idiomatic Rust/Zig/C. I am aware that transpilers and AssemblyScript and Static Hermes exist, but what I mean is more of a result that produces a maintainable rewritten version making use of the idiomatic libraries and coding conventions of the target language.


Good points, thank you! I'm lucky enough that I don't really need to worry about operating cost because I self-host the few things I have built and am nowhere near maxing out my hardware, but your point is well made.

Although...

> writing the equivalent server in go wasn’t really all that different from the typescript code

I think _that's_ true - one strength I'll absolutely concede to Go is that spinning up a basic server is very concise with the stdlib. The problem is that implementing its _logic_ is then super-verbose.


Go is my favorite general-purpose tool (combination of language + standard library + 3rd party libraries + tooling + documentation). Everything just works out of the box and is simple to use and understand. No collection of dozens of external tools with fiddly configuration, a simple command to compile and simple deployment via a single executable with no additional setup required. No other language gives me such a simple and hassle-free all-around experience.

I certainly think other languages (Java, C#, Rust, JS/TS) have a lot of advantages over Go in some areas, but everything I've worked with so far has some other aspects that I absolutely hate.

POV from a (mainly) B2B fullstack SE


I've been coding in Go for over five years. I like Go, but I don't love it. It's never my first choice, although I don't advocate for rewrites just to move away from it.

The tooling is a mess. Go modules still feel like a 'first pass' implementation that never got finished. There's no consistency in formatting or imports (even though Go claims there is). Generics are a good step but are still very primitive (no generics on interfaces, no types as a first-class object).

It still feels very unfinished as an ecosystem. I hope it'll get better as the Go team mature things, like iterating on generics. But I can't see Go modules continuing without a fundamental rewrite.


Not sure what you mean by "no generics on interfaces"? https://go.dev/play/p/jGINeUt1JTE

Also echoing not sure what you mean by "no consistency in formatting or imports". It is increasingly difficult to use Go without gofmt getting run, since I have to imagine fewer and fewer people nowadays are using an editor that has neither custom support for their language nor LSP integration, and integration with gopls automatically runs goimports on save.


> Not sure what you mean by "no generics on interfaces"?

I didn't word this very well. You can have a generic interface, and functions on that interface can refer to generic types. But you can't have a generic method on an interface that uses a different generic type. For example you can't have:

    type YieldThing[T any] interface {
        Yield() T
        DoOperation[U any](U)
    }


I figured based on your comment you meant something by it.

In which case I'd point out that it goes beyond interfaces and Go just doesn't permit that in general, for those who are playing along. All generics are always fully instantiated. You can have a

    type Holder[T any, U any] struct {
        Val1 T
        Val2 U
    }
but you can never have anything like a variable of type Holder[int], with a type-currying effect or something where the U bit is still unbound, or a bare variable of type "Holder" without any further parameters. All types within the language are always fully instantiated after compile.


Yeah I get it. I've found workarounds in the past. But it's always involved some friction.

In general, Go's generics are a huge step forward though. So I'm not that annoyed about it.


There are good reasons for not allowing generic methods:

https://go.googlesource.com/proposal/+/refs/heads/master/des...


Realistically the reasons are this:

> So while parameterized methods seem clearly useful at first glance, we would have to decide what they mean and how to implement that.

They just didn't want to decide what they mean or how to implement them.


Right, but that's because it is not obvious what they should mean or how to implement them. In particular, it seems that intolerable complications to the linker would be required. Contrast that with, say, a proposal to add a more concise syntax to Go for anonymous functions. Everyone can see exactly what that would mean and how it would be implemented.

I'm not saying that Go should never add generic methods to the language. But it's at least reasonable to hold off from doing so until these issues are clarified.

There is a concise explanation of the central problem in a comment in the issue thread:

> The crux of the problem is that with a parametric method, the generated code for the body depends on the type-parameter (e.g. addition on integers requires different CPU instructions than addition of floating point numbers). So for a generic method call to succeed, you either need to know all combinations of type-arguments when compiling the body (so that you can generate all bodies in advance), or you need to know all possible concrete types that a dynamic call could get dispatched to, when compiling the call (so you can generate the bodies on demand). The example from the design doc breaks that: The method call is via an interface, so it can dispatch to many concrete types and the concrete type is passed as an interface, so the compiler doesn't know the call sites.

Rust allows generic methods on traits but doesn't allow these traits to be instantiated as trait objects. Perhaps Go could do something similar. https://docs.rs/erased-generic-trait/latest/erased_generic_t.... But it's far from obvious to me that this would be a good idea for Go, and I'm glad the Go team haven't rushed into anything here.


What do you mean by no consistency in formatting? go fmt is a solid formatter that does its job


Go fmt is pretty good, but it's not ideal. My biggest gripe is imports. Go fmt will just sort imports alphabetically in lists that aren't separated by a blank line. Goimports will separate out core from 3rd party imports, unless you run it with the local flag then it'll add a third block of "local" imports.

But this spread means that it's not consistent across projects which style is preferred.

Some examples, based on cursory looking at big Go codebases:

- Kubernetes, one of the biggest public-facing Go projects, uses the 3-block style https://github.com/kubernetes/kubernetes/blob/master/pkg/con...

- TIDB uses 2-block style https://github.com/pingcap/tidb/blob/master/pkg/ddl/placemen...

- MinIO uses 2-block https://github.com/minio/minio/blob/master/internal/grid/con...

In all of those cases if you make a change and just run 'go fmt' it very well could inject any new imports in the first block, which would be wrong and you wouldn't know until project CI picks it up.


It sounds to me like there are some people who don't follow the style, rather than there not being a consistent style.


Which of those options do you view as 'the style'? One block, two blocks (core library and others), or three (core library, 3rd party, same-source-tree).


The first rule of any style guide should be "adhere to the conventions of surrounding code." So the answer to your question is it depends on what the convention in the file (and surrounding files) has already chosen. If there is only one grouping and you're adding enough imports that you have to choose between making two or three, then I would say use two because that is what the canonical & normative style guide recommends [0], but if you're working in an organization that has chosen to use goimports or has alternative guidance in its org-specific style guide then go ahead and make it three groups.

[0]: https://google.github.io/styleguide/go/decisions#import-grou...


Try gofumpt, it’s my default.


After using rustfmt, I feel gofmt is not up to snuff. My main gripe with it is there are multiple formattings that are valid: there is not one true format. E.g. gofmt does not enforce line length which makes diverging styles possible, like for function declarations.


Funnily, 1.5 decades after Golang popularized formatters, in 2024 it is the only language that I work in that requires me to think about formatting. Mostly line length, but super annoying.


Not sure what you're comparing to, but Go modules are probably the best dependency management system in any language.


It does a lot well. For example it correctly pins dependencies by hash in go.sum. It's by no means the worst dependency management system I've ever used.

IMO the biggest miss with Go modules is conflating the identity of a dependency with how you get it. This means that renaming a repo not only breaks the module itself (as you self-import other modules in the same source tree using the full path), but all of your dependencies. I've seen repos be renamed from github.com/foo/proj to github.com/bar/proj as part of organisational reshuffles, and then there's a big warning somewhere that says "never make a github.com/foo/proj repo or it'll break GitHub's automatic forwarding for renamed repos, and you'll break every package that depends on us."

There are workarounds like using replace directives. But that makes an even worse situation where you can read a source file and assume a dependency is at github.com/foo/proj but actually it's elsewhere. But ultimately a real fix involves touching every single file that imports your dependency. If Go modules confined where a dependency is fetched from to go.mod alone, it wouldn't.

You should use a Go modules proxy to solve this, and a custom import path. But by the time most orgs realise they need this it's too late and adopting one would be a huge change. So you end up with a patchwork of import issues.
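To make the replace-directive workaround above concrete, here's a hypothetical go.mod sketch (the module paths are illustrative, reusing the foo/bar names from the comment):

```
// go.mod (illustrative)
module example.com/myservice

go 1.22

require github.com/foo/proj v1.2.3

// Source files still say import "github.com/foo/proj",
// but the code is actually fetched from the renamed repo:
replace github.com/foo/proj => github.com/bar/proj v1.2.3
```

Which is precisely the readability trap described: the import path in every source file no longer tells you where the code really comes from.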


> But ultimately a real fix involves touching every single file that imports your dependency.

Why is that a problem?


For an internal-only dependency it's possible. But if you've got a lot of active branches, or long-lived feature branches, it'll create chaos in merge conflicts. Even worse if you've got multiple supported versions of a product on release branches (e.g., `main-v1.0`, `main-v1.1`, `main-v1.2`, and `main` itself for the yet-to-be-released `v1.3`) you either make backports awful (by only changing the import path on `main`) or have to change even more things (by changing the import path on the release branches too).

It's effectively impossible for public-facing dependencies. Imagine if https://github.com/sirupsen/logrus wanted to change their Go modules import path, for example to move to another git hosting provider. (Logrus is great by the way, I'm only 'picking' on it as a popular Go library that's used everywhere.) GitHub tells me that almost 200,000 Go projects depend on it (https://github.com/sirupsen/logrus/network/dependents), so all of them would need to change every source file they do logging in (probably most of them) in order to handle that.

GitHub seems like it's going to be eternal for now, but when the industry moves on in 10 years time every single Go project is going to break. This would be a problem for any source dependency management solution of course, it's not like any of the others are immune to this issue. But because Go has you encode the Git path in every source file you import it into, the level of change to fix it is an order of magnitude higher.


> GitHub seems like it's going to be eternal for now, but when the industry moves on in 10 years time every single Go project is going to break.

This isn't correct. The Go module proxy stores all of the module content so it's still available even if the original source is removed.


Between conflating imports and URLs, the weird go.mod syntax, and the nonsense that are GOPROXY, GOPRIVATE and such, yes, it’s a mess.

Honestly, aside from left-pad stuff, even npm is much better. I personally find cargo to be the best one I’ve tried yet. Feature flags, declarative, easy overrides, easy private registry, easy mixing and matching of public/private repos through git, etc. And like go it properly handles multiple versions of the same dep being used, but the compiler will help you when it happens.


Dealing with private repositories does suck, I'll give you that.


Exactly. What is he even comparing go modules with? npm and pip? The dependency disaster management which keeps breaking unless I create a new virtual environment for every project?


There are a lot of aspects of Go that I’m really not a fan of, but I’ve been writing an increasing amount of it over the last few months and on the whole I’m finding it really enjoyable—even though I’m not sure I could empirically explain why.

The tooling is heaven compared to other stuff we do a lot of at work (TS/JS of course being the main offender), and generally I find I spend less time thinking about things that aren’t directly related to the problem I’m trying to solve. Though, that might just be because I’m not an expert and simply don’t yet know about other things I could be thinking about!


I have been writing Go code for many years, the fact that I don’t need to think about things unrelated to the problem I am trying to solve is why I love Go. I definitely learned a lot about Go and I am thinking more about certain aspects than before, but usually only in the context of API design.


The "not having to think" is the best bit.

There are no fancy ways to do things, only the simple way.

If something feels like it's hard to do it's either the inefficient way to do it or the wrong way.


Life is barely long enough to get good at one thing so you should choose your thing wisely. That's wisdom I've held for quite some time.

Coincidentally, I chose Go as my language of choice as well. The factors that led me to that choice were many, but to highlight some:

- incredible standard library

- simple to read and write

- single static binary builds (assets included, like html/images, etc)

- don't need a container (my binary is the container)

- can be used anywhere (webdev, desktop apps, gamedev, embedded, etc)


I chose C# for the same reasons. It's probably easier to make C# unreadable than Go due to its plethora of features, but it all comes down to how you discipline yourself about writing code.


My main beef with C# was the culture. I spent around 15 years building stuff in C# (since 1.0), and absolutely loved the language. But, man. The Microsoft shops were so full of Kool-Aid drinkers, and there was so much enterprisey crap that often came along with that. I miss C#, but I don't miss the culture.


"it all comes down to how you discipline yourself about writing code."

I don't think controlling one's own code is a problem. I'm more worried about the opportunities the language gives for Bob (my junior coworker) to slowly introduce all sorts of unclear cutting-edge language nonsense. Go is pretty good for this in my opinion.


That should be part of the team culture, similar to agreed upon coding conventions and naming standards. If there are no guidelines, Bob can screw up Go code just as bad as Perl.



Single file AOT is now officially supported by base dotnet.


bflat has somewhat different set of goals and offers "zerolib" and "uefi" runtime target flavours. Its author is also working on the official NativeAOT, which bflat builds on top of :)

While we're at it, it's impressive how much NativeAOT has improved within just 2 releases or so: https://migeel.sk/blog/2023/11/22/top-3-whole-program-optimi...

There are other niceties like dehydrated binary sections, metadata compression, linker that is deeply aware of the type system, etc. to keep the binary size scalable as you keep adding dependencies. I'm seeing even smaller sizes with .NET 9 preview.


The whole issue of readability is overstated and can be reduced to familiarity with syntax and idioms.


I appreciate the Rust approach where Rust <-> C and Rust <-> Python get along really well together. They really thought about interoperability with existing infrastructure.

Java wanted everyone to rewrite everything in Java because that was easier than to interface with the existing libraries on the OS.


What game engines use Go?


I should say: what 3D game engines use Go?


Yes-- the one area where a pure-Go solution is weak happens to be 3d game development.

The obvious solution is to use C/C++ library bindings (raylib, opengl, etc). Yes-- you pay a tax for making calls into the C world and it becomes a lot harder if not impossible to do single binary builds. It also complicates native ports to other platforms like Nintendo, etc.

It's not impossible, just complicated. Luckily, I don't do 3d game development.


Not sure if you're being intentionally cheeky by pointing out a use case one wouldn't use Go for but will answer anyway.

A language with a GC (like go) typically isn't a good fit for a 3d engine. Almost all serious engines are C++, at least for the core code, for that reason.


> pointing out a use case one wouldn't use Go for but will answer anyway.

The thread is about using Go everywhere and I make games so I'm asking about it.

Unity and Godot have C# scripting which most game code will be in. Unreal has garbage collecting for game assets. GC is just another tradeoff to work with and certainly not a deal breaker for games.

I'm mostly curious about whether someone has done the work to integrate goroutines with a 3D engine in a performant way as typical techniques require a lot of synchronization with native threads that seem at odds with Go's design. But I'm curious to see it done.


I really like most aspects of Go, but as someone who writes a lot of numerical code, no operator overloading is a deal breaker. It's so hard to accept something like Add(Scale(s1, vec1), Scale(s2, vec2)) over s1 * vec1 + s2 * vec2. So I stick with Python and C++ for now.

Rust is really appealing as a C++ replacement, but it has too many rules to replace Python for one-off scripts. Still need to try Nim and Swift, I guess...


We switched from Python to Julia for numeric code. It's generally much faster (and easier to optimize), and has slightly better syntax for mathy code.

It's worth noting that Julia is very similar to go, despite superficial differences. They're both small languages with big libraries, use the same concurrency model, use a mark-and-sweep GC with an emphasis on making it easy for the programmer to reduce garbage/allocations, and both use structs + functions rather than classes.


Julia looks like a great language, but has the "time to first plot" issue been fixed yet? I don't want a REPL-centric workflow.


It's vastly faster than it was before (~5s compared to ~40s for the Plots package, and much less for most workflows) but it can never drop to 0, since it's compiling each function the first time you use it.

Though honestly if you don't want a REPL or notebook-centric workflow, I would probably recommend a different language.


In the same line of reasoning, Go also doesn't support getters/setters. You can't write v.a = 1 and expect it to call v.SetA(1). And that's a good thing. Cosmetically, because it avoids every thickhead adding a trivial get/set for each and every member. Functionally, because getters/setters can make code appear to do something it doesn't do, and the same goes for other operator overloading.

I've worked on C# code where db.open was a test to check for a database connection, but assigning to it would open or close the database. It should simply not be possible to hide that behind an assignment symbol. It's like having db.Query(...) delete the data.


Something like Elixir's function composition operator (->) would go a long way toward smoothing that out, but they rejected that as well.

I think the best we can do at the moment is runtime expression evaluation with something like this:

https://github.com/expr-lang/expr

or this: https://github.com/Knetic/govaluate


Elixir's function composition operator is |>


In my case, I prefer to see that the code is calling functions, so that I'm aware that they take extra resources such as stack space, and it's easy to quickly jump into the functions to find out what they're doing. Code that uses operator overloading is hard to navigate and sometimes causes intense debugging pain. Simplicity always beats fancy features imo.


Operator overloading can be bad when abused for non-numeric types. But for numeric types, it is indispensable.

It's not hard to remember that `__add__` is using extra resources when I'm working with big matrices in NumPy.


Functions, especially functions like numerical operations, tend to get trivially inlined and consume no stack space.


Numerical programs seem like a great fit for DSLs, say Starlark[0] ;-) That's what many C++ libs basically achieve with their heavy use of overloading, template metaprogramming, etc.

[0] - https://github.com/google/starlark-go or alternatively https://github.com/facebook/starlark-rust


But why would someone use a new DSL, vs an existing (general-purpose-ish) language that supports numerics (there are enough out there, e.g. R, Python, or the numerous paid ones like Matlab, Mathematica or IDL)? You have no ecosystem, plus now you've got to wrap 50+ years of numerical libraries...


Is this something Go intentionally didn't add?


Yes. Because "things are simpler without it." Presumably "simpler" from the point of view of the language implementer as opposed to the user of the language.


Operator overloading essentially makes code unreadable without deep diving past the interface boundary.

In Go, you can generally look at any snippet of code and know precisely what it does.


Not having operator overload making the code more readable is the same argument that was brought up with generics and it is still false.

Go does have operator overloading, for example + is overloaded for float.., int.. and even non numeric types like string.

And it does so for a very good reason: having operator overloading makes code much more readable when used correctly. It's just that the language designers didn't trust their users.

As long as you know the types of x and y you always know precisely what x + y does, same as you know what x.Add(y) does. There's no difference.


There is no difference from an .Add() function, that's true, but even for strings you wouldn't have an Add function; it would most likely be an Append(), which explains much better what is happening.

And verbosity helps; forcing verbosity on users raises the general level of quality. Programmers can overestimate themselves and think they are doing it correctly. Looking back at my code from a year ago, I see things I should have done differently. I like languages that keep me from making really dumb mistakes.


How is a + b any less “know precisely what it does” than a.add(b)? It’s literally just different syntax for the same thing.



[flagged]


Gotta admire the craziness of sticking with a language and apparently even building a community of fans even after such an insanely panned debut. I wonder if this will be the No Man's Sky of languages. Does it actually work yet?


At the peak of my honeymoon phase with Golang, I went down this path too, and oh boy does it feel great and liberating, like finding a magic bullet. But as soon as you start to scratch beyond the surface and dig deeper, you will find yourself trying to drive screws with a hammer, or tie your shoes with a chainsaw, and ungodly things like that.

No tool deserves more love or loyalty than the productivity it brings; anything more is infatuation, a game for the naive and the foolish.


It's not really about Go, I think. But it makes me productive, and I like that. I don't think it's a magic bullet at all. Lots of things annoy me about it. But it's _good enough_ for quite a lot of things IMO.

But tying shoes with a chainsaw does sound kinda fun. :D


I completely understand, but the productivity is superficial in my opinion, once you need to dig your teeth deeper into anything not "cloud" and "system engineering" with Go, overall productivity plummets hard.

This is from some 10 years of Go experience. But like you say, it is not really about Go; my point is about the illusion of productivity that familiarity brings; it is very deceptive and hurts productivity in the long term.


I don't understand. How can it be an illusion of productivity when it actually produces something that's very easy to see?


He is saying that what is very easy to see in certain contexts is not transferable to the abstract. Go can be shockingly productive at certain tasks, but it also falls flat on its face at others. If you take your limited experience with Go (or whatever; this applies to all tools) where you found it to be highly productive and then conclude that it is always productive and therefore a tool you can use in all situations, you are bound to encounter negative productivity when you have a different problem to solve.

Need to write, say, a network service? Go can no doubt give just about anything else a productivity run for their money. Need to solve a machine learning problem? ... Good luck. It can be done, of course, but you're quite likely in for a whole lot of extra work not needed in other ecosystems, destroying any semblance of productivity.

In other words, the comment is a thinly veiled "use the right tool for the job".


Productivity is the sum of the aggregate output not just getting started.

If you're familiar with a tool, you might be able to "get something" a lot quicker with it than learn to use the correct tool, but on the long term, the correct tool pays off better.


Very few languages are good at _everything_.

But what I do 95% of the time is backend services with APIs and CLI tooling, for both it's amazing.

For GUIs it's hit and miss. There are some attempts at tooling, but they all feel a bit off to me. But Rust has the same issue, as does C#.

We need a modern Visual Basic again =)


I think well-written and engineered Python can do this (given the scripting/easy to call other languages bit), but there are still going to be bits where you're going to be fighting uphill (whether that's because the area is actively pushing against their preferred option, or where a scripting tool is inappropriate).


If someone can't tie their shoes with a chainsaw, they just haven't dedicated appropriate time to their tools :)

Side note, this reminds me of a local YouTuber who mostly films himself using a chainsaw and has a remarkably large following (516k followers right now!). He also loves axes. I present to you Buckin' Billy Ray: https://www.youtube.com/@BuckinBillyRaySmith

I'm pretty sure this guy ties his shoes with a chain saw


Haha, sweet. :D


"The world is laaaarge. The number of projects are basically infinite. Even if I carve out a tiny subset of infinite, that’s still infinite." I liked the use of this wisdom in there.


You'll enjoy set theory then. Just call the first set aleph-null.


Thanks! I guess I have moments of insight sometimes. :D


This article is not so much about Go but about choosing to specialize in one language ecosystem instead of spreading one's attention across several.


With the caveat that the broadened perspective you obtain by learning a variety of languages will make you better at your primary language.


This is true, but it doesn't mean you should actually use a variety of languages for your day job if you're self-employed; that is, you lose some productivity if you choose an unfamiliar tool, and you'll shoot yourself in the foot if you choose a language unsuitable for the problem at hand.

The other thing to watch out for if you're in a corporate setting is that if you use a language nobody else is using at your company, your project is doomed and / or it will be stupid expensive compared to other projects because it uses a different language from the rest of the organization. See also https://boringtechnology.club/


I would say there are many, many other ways to broaden my perspective, not just learning and using a different programming language. There are so many things to learn!

(And I don't necessarily suggest not learning different languages, if one fancies that. I just don't use any other ones at the moment.)


I actually find it a bit of a curse to know too many languages, because settling on one for any given project becomes very difficult. You start coding something up, hit some friction, and start thinking about one of the other languages you know that would handle it more elegantly. Or you might have chosen a language that does a lot of things elegantly, but then start thinking about the performance you've given up for that elegance. Swapping languages will never stop those thoughts, because there are always tradeoffs somewhere.


I always wonder: why do some people like to do this (spreading)? I wonder the same about people who are "distro tourists". The latter tend to spend a lot of time on what seems like unproductive diddling (desktop skins, etc).


1. Because I like engineering more than programming. If a problem is best solved by, say, a mechanical system then I will do that instead. But even where programming provides the best solution, not all ecosystems are equal. If I have a problem best solved by, say, AI/ML, it is often impractical to avoid Python. Likewise, if I have a problem best solved in the web domain, it is often impractical to avoid JavaScript. SQL where databases are the best solution. So on and so forth.

2. Because I like to learn. In my younger days, I found a lot of value in exploring the different ways people live. In fairness, I've toured the programming world enough by now that I am less compelled to keep going on "programming language vacations"; at some point you start to feel like you've seen it all. But I believe I would be a far worse programmer today had I not been able to take ideas from a wide range of different cultures over the years. There are some good ideas out there that never seem to gain mass adoption outside of their originating ecosystem.


Diff langs have their strengths and weaknesses, the same reason you don't build every object out of concrete "because it's tough" you don't build every service out of Python (because it's slow).


Because there's so much to learn from having different perspectives.

Back in 2006, I was working mainly in C with a bit of C++. One of the things I was taught was that "macros are evil", period. Then, in my spare time, I decided to try Common Lisp. I had a blast. I never wrote anything more useful than a half-assed catalog for my MP3 collection, but I learned a lot. My main takeaway was the power of metaprogramming -- with all of its footguns and pitfalls -- and I took that knowledge with me to my day job, where I suddenly had a more nuanced view of how preprocessor macros can avoid being evil.

Later, when I changed jobs, I went back to Java, which was my main language before the C stint. I slipped back into the comfort of having my memory managed for me and the expressiveness of OOP. But in my spare time, I discovered Self and Io, and the fact that you can have OOP without classes blew my mind.

At that same job, I undertook the task to make our proprietary in-house language not only transpile to C++, but also compile to be executed on JVM. Understanding how JVM bytecode instructions work was easier than it might have been had I not dabbled in Forth in my spare time.

These days my day job involves working with C++ full time, but my hobby projects in Rust taught me to structure my thinking about the ownership and lifecycle of memory allocations.

So yeah, most of what I do in my spare time with other languages is largely "unproductive diddling" if you only look at the code I write in those languages, but the insights I take away with me are useful.

Also, it's fun ;)


I can think of many reasons. Seeing other language perspectives definitely can help with your primary language. I have experienced this. But, I have also explored other languages as a means of procrastination or ways to do something easy when bumping into the more challenging depths of my language of choice.


Which is unfortunate because I think Go is very useful in a wide variety of business contexts.


Then you would find this article very fortunate.


It would be a misfortune indeed if I had nothing to complain about.


The author feels happy and contented and you all are getting defensive. He’s a solo dev, just recognize that that scopes his projects and stop trying to shoot him down.

Use what you want.


I choose to read most comments in a positive light, but thank you for your kindness. :)


You seem to be confusing defensiveness with sharing different perspectives


I can't really disagree with the points the article makes in favor of Go, and it's not selling it over some other language/framework/tool but just celebrating how great of an ecosystem Go has. And it's true -- Go's ecosystem has matured into something very pleasant to work with.

By the same token I know professors who still write their simulation scripts in QBASIC because that's what they are familiar with and they can solve their problems quickly. You can use all sorts of tools to drive a nail.

On Go, it's almost a footnote in the context of the post, but I think a seriously underrated feature is its C-interoperability. Here [0] is an example. It's not unique of course -- tons of languages have some FFI solution for C libraries -- but Go's is I believe one of the most straightforward to use if you're already familiar with the language. And while there are portability/stability sacrifices you make when you call a native library, it does also expand the available dependencies even beyond "basically infinite."

[0] https://go.dev/blog/cgo


Go has some of the worst FFI support, IMO. Code in comments? It doesn’t support C callbacks without a “bridge” function. MSVC support? Try debugging when cgo is enabled. Worst of all, it’s also one of the slowest FFIs out there.


You might like Zig too. The C interop is amazing. Last time I checked you could use Zig to just compile C straight up.


You're totally right about CGo. For example, I'm very happy that I can use the insanely well-tested original SQLite C library directly in Go, and sleep easy that it's not some half-ported pure Go library.

(I know there's an auto-transpiled SQLite lib as well, which is probably just as good. But then I have to rely on no bugs in the transpilation process, and I don't like that. That may be superstition though. ;) )


I understand the sentiment. My goto hammer is Kotlin, which I like a bit better than Go. But that's a highly subjective thing of course. And I use plenty of other languages as well (including very occasionally some Go).

It's not about what is better in general but about what is better for you. Better here means less time wasted with figuring out syntax, tools, APIs, frameworks, etc. Once you know how to do a certain thing in one language, having to relearn to do the same things in another is slightly annoying. Although, LLMs are actually hugely helpful for this these days.

IMHO we're about to see a minor renaissance in web development. I was playing with the Kotlin WASM compiler a few weeks ago just to verify that I could use existing web APIs. As it turns out, it's all there. Using them might not be the fastest right now, but I'm sure that will improve over time. Garbage collection is already in (and coming soon to Safari as well). There are some inefficiencies with making calls into JS that need some attention, but that's not really a show stopper unless you are doing this many times per second.

What that means is that you can just do web application in wasm; use all the stuff from the browser that you normally use but without any javascript (except for a tiny wrapper that loads the wasm). I actually use kotlin-js so it's not a big leap for me. But wasm loads a bit faster and probably compiles a bit faster too. No more webpack uglification needed (which is actually slower than the kotlin compiler). So that's 2x compiler performance right there.

The point here is not kotlin or javascript but that this now works with any language that you can get going with wasm. Including Go if you want. Javascript becomes completely optional. I'm sure some will be upset about that. But that would be mainly because it's their preferred hammer.


But you also need the GUI part. Flutter seems to be the main competitor here. What's the current situation with Kotlin? Is there a decent Compose for Web yet?


With browser dom support and CSS, that would be doable. Compose web is in alpha currently; it renders to a canvas.


I am planning a switch to Kotlin just because it seems more readable. As someone who has done both how do you rate Kotlin's STL?


The standard library plus the growing number of multi platform libraries covers a lot of stuff at this point. There are still some things that are a bit lacking but most of those things have people working on them.


It barely has a standard library. You’ll be dependent on the standard library of whatever platform you are targeting. JVM -> Java, Web -> JS.


Yeah, WASM is interesting! I haven't followed the space at all, but I guess the browser is the new JVM? :D


Heard from someone: "C++ is a hammer, but then everything starts to look like your finger"


had such a good laugh, I’m going to steal that :)


If all you know is a Hammer…

I was searching for reasons to use the Go hammer when there are comparable ones such as Java, C#, etc., but the article left me wanting.

It strikes me that Go is riding the peak of hype languages, succeeding Rust and Node.js (which are all good pieces of technology and absolutely have their merit). And like with most hype driven decisions there is little (self) awareness of context and alternatives.

Note, this is explicitly not about the languages themselves, but rather the larger cult(ure) of mainstream programmers around them.


It feels extremely strange to see Go described as a "hype" language. It was somewhat hyped around seven years ago, which is when I started using it almost exclusively. Before that, it had a strong hype peak about fourteen or fifteen years ago, just after it was born.

However, it's not a young language any more. At this point I would actually submit the opposite criticism: it is a mature language which is perhaps starting to suffer from middle-aged spread, as the bored maintainers start to stuff in more features which the language doesn't genuinely need. (Example: Iterators. Sure they're nice, but does the language really need them?) If anything, I would say it's going more the way of Java than Rust or Node.


Count the number of posts with Go in the title, vs any other programming language. It easily beats out everything both here and on lobsters.


I did this about 1-1/2 years ago and just reran the numbers.

Today:

  "written|built in Rust"           1113 (1036|77)
  "written|built in Go/Golang"      1050 (889|25) / (119|17)
  "written|built in Python":         452 (422|30)
  "written|built in Javascript/JS"   286 (233|14) / (34|5)
  "written|built in Java"            115 (109|6)
  "written|built in TypeScript"       97 (89|8)
  "written|built in C#"               46 (43|3)
  "written|built in Stone"            13 (13|0)
Feb 2023

  "written|built in Go/Golang"      918 (792|17) / (102|7)
  "written|built in Rust"           832 (790|42)
  "written|built in Python":        390 (364|26)
  "written|built in Javascript/JS"  269 (220|14) / (32|3)
  "written|built in Java"           107 (101|6)
  "written|built in TypeScript"      66 (61|5)
  "written|built in C#"              41 (38|3)
  "written|built in Stone"           10 (10|0)


Well, it's a very usable language which is strongly typed (by any layman's definition), easy to pick up, easy to get stuff done in, relatively easy to live with and easy to maintain. If it's hyped then it's only in the same way The Beatles and Pizza are hyped.


I like that! Go is basically pizza! :D


Maybe, but 80% of those are complaints of some sort.

That's the opposite of hype.


I think the hype has already peaked for Go. When we switched to Go 8-9 years ago I already felt the hype had died down. We were late to the party. And I saw that as a good thing. To me it suggested Go was going to stick around.

And now, almost a decade later, we're still using it.

Note that at my company, we're a bunch of grumpy old guys with about 30 years of experience each. If you dangle a new and shiny language in front of us, our default instinct is to tell people to get lost. We don't do "fashionable". We tend to make investments in learning languages that we expect to get at least a decade out of.

And I don't think Go is succeeding Rust. From what I can tell, people are going in both directions.


Go and Node have some significant overlap, but Go and Rust are barely even competitors. Of course there's a lot of programs that you can use either to write. There's a lot of programs where it hardly even matters which language you pick at all because they're all perfectly acceptable solutions, though, so that really isn't saying much. But if you map languages out by the programs where language choice matters at least a little and where a given language is legitimately one of the top contenders, Go and Rust don't have a particularly strong amount of overlap in my opinion. They do, but again, lots of languages have overlap; I don't see the Rust/Go relationship being particularly heavily overlapping. (Go/Java or Rust/C++, now there's overlap, for comparison.)


> Go and Node have some significant overlap, but Go and Rust are barely even competitors.

Among the people I know Go and Rust seem to be at the top of their list. Almost all people I know who program Rust also know Go. Not as many Go programmers I know program in Rust as well, but some know Rust and even more want to learn it. Which probably isn't too peculiar because Rust is more of a kerfuffle to program in.

I'm not denying your observation. I'm just reporting that from where I am, things look a bit different.

I agree that it mostly doesn't matter that much which language you choose -- as long as it is a language that can meet certain minimum standards. In my view there is only about a handful of languages to actually choose from.

The first is that it must be possible to produce proper binaries. For instance, Python is unacceptable to me since it tends to place a large burden on the user. Next, it has to be a somewhat mainstream language with a sufficiently large community. With that sorted, the more technical stuff needs to be evaluated: the standard library, how ergonomic the tooling is, what quality of software and libraries the community produces, etc.


That likely suggests the people you know are working on a specific kind of project.


Perhaps. Most people I work with do servers/backends, distributed systems and embedded systems.

I think it may have more to do with the kind of people I work with.


I wasn't aiming for that. My point is more: I know Go already, it's good enough for my purposes, so I'm using that as my hammer. It could just as well have been Java or C# I learned eight years ago and used for everything. :)


I just began using Go literally 2 days ago, but I could already see building most of my projects in it from here on out. It has the type system I'd need from Java or C#, while being almost as simple and readable as Python. I love the module system so far and enjoy not having to decide on my own formatter.

Go is perfectly boring and simple, and seems to get out of my way. At least, that's how it feels as a newcomer. I'm sure it will change over time, but I'm having a blast at the moment.


> It has the type system I'd need from Java or C#

Its type system is much more primitive than either one of those.


I've been using it for work projects for a year or two now and I still feel the same way as you.


It actually gets even better! :D Enjoy the journey!


What about the absence of exception handling?

Plus OO has its own advantages. Even if you don't use it all that much, it's one of the best ways to fit problems into neat design patterns.

Go seems to be missing those features.


In modern Java applications, interfaces are favored much more heavily than OO designs relying on inheritance.

And Go has very nice support for interfaces. To the extent you don't even need to declare support for an interface. Go just figures it out if your type supports all the methods in the interface.

If you mean classical Smalltalk style OO, no Go doesn't support that, but neither do Java or C# or C++.


Interfaces and functions do pretty much everything inheritance does. It's just cut in a different direction. If you go in demanding that Go support inheritance you'll be in for a bad time; if you go in demanding that it have some good solution to those problems that you may not be used to, you'll find it generally has them.


Go has exceptions and exception handling in the form of panics and recovers, but go devs are discouraged from using them.

Go also supports most of the typical attributes of OO. It has polymorphism and encapsulation; it's just missing inheritance and uses composition instead. Most Java and C# code bases have stayed away from inheritance (for a long time), so there isn't that big a difference.


What you see as missing, others (including myself) see as delightfully constrained and solved in a different way.


This is my first time working with a language that exclusively uses errors as values, and I haven't had time to develop any strong feelings about it. So far it feels nice because it makes error handling crystal clear, but I could also see it becoming cumbersome over time.


It’s the exact same broken system as has been known from C, it’s literally checking `errno` all the time.


I mean, you're not wrong, it's just strings (usually) instead of ints.

It's pretty bad, but it turns out that exceptions and inverting my entire program to pass into a Result's flatmap chain are even worse.

When they come up with another strategy, I'll try that instead, but until then I'll keep using the least bad option I've found.


> It's pretty bad, but it turns out that exceptions and inverting my entire program to pass into a Result's flatmap chain are even worse

Based on what? Exceptions do the sensible thing at all times: auto-bubbling up (no random swallowing), contain info about its origin, and a handler can be specified as tightly or widely as necessary.

It’s objectively superior to grepping for an error message, or straight up doing nothing, which Go is especially prone to, as after a time all the `if err`s become just visual noise.


> Based on what?

My experience using all three.

> no random swallowing

We have not been in the same codebases, apparently. The number of times I've come across (I'm paraphrasing):

    try {
        // some stuff that can fail
        // bonus points if there's a comment explaining why it can't error in practice
    } catch (Exception e) {}
Is hilarious. Especially when it happens in library code.

As verbose as `if err != nil { return nil, err }` is, the fact that it gets banged out so much means people default to it, and I find myself less likely to get into a weird partially initialized state in go than I have been in other languages.

> It’s objectively superior to grepping for an error message

Then use `errors.Is`?


It can get verbose, but I'm willing to live with it for all the other benefits of the language.


I mean, to this end Go is closer to Perl (the C/Unix family) than to Python (the Pascal family?).

But by now most programmers are so used to the language doing a lot of error-handling chores for them that having to do them manually feels like doing work that shouldn't need to be done (like doing what the language should have been doing).

Of course you could bolt OO onto Go, in a way similar to how Perl provides bare-bones means to assemble an OO system. But that is not the point; having syntax for this baked into the language just makes it easier and standardised to learn and use well.

Perhaps Go is Perl done right, rather than a replacement for Python.

But one has to see how long it can maintain its minimalism. In the long run, code tends to be as complicated as the problem you are trying to solve. Removing features doesn't make the code simple, it simply makes it explicit (a.k.a. verbose).


? It doesn't have exceptions, so it doesn't need exception handling. It has an error system / standard that isn't based on exceptions.

I only know exceptions from Java myself, and in practice, what it calls exceptions are often... well, not exceptions at all. Files missing, SQL queries returning no results, division by zero are not exceptional situations, but normal day-to-day events. And an exception generates a stack trace, which is an expensive operation.

I mean one way to avoiding that is defensive programming - check if a file exists, do a count on the SQL query first, do a pre-check or assertion before dividing - but that adds more and more code that you need to write, test and maintain.

OO has merit for what you describe, but Go's alternative works just as well (imo) for that purpose; you have struct types containing data, you can add methods to those types to encapsulate behaviour. Go doesn't have OO inheritance, but inheritance has been out of fashion for years now so it's not missed.

TL;DR, exceptions and OO are solutions to problems, Go has its own solutions to those problems, neither of which are difficult to understand.


Throwing prevents the caller's control flow from passing into code that assumes a useful value was returned and ready to be consumed when it wasn't. Handling a success or a failure with exactly the same code shouldn't be a default because it almost never makes sense.

A Java exception can suppress stack trace init if needed, letting an instance be created once and reused very cheaply (though logs will be less useful).


The whole idea of exception handling with try/catch comes from the fact that problems and error handling can all be standardised into one structure of patterns. Once you are there, you can be sure the language (compiler) and tooling (IDE/editor etc.) will generate them for you and provide you with means to handle them.

Nothing much has changed in terms of problems and interfaces. So the errors will remain the same.

Now when you use Go, you will have to write a lot of code that the language + tooling could otherwise do automatically for you. This is just doing work that you shouldn't even be doing in the first place, and pointless verbosity.

Minimal is not always better. And some parts just feel like they were omitted because the language designers didn't want to do the work for you. Whereas the whole point of a programming language is that it does as much work for you as it possibly can and makes things easier for you.

Just saying: writing the same/similar code over and over again is wasted work.


Of course, Go has try/catch, albeit under the keywords panic/recover. The whole idea of not using it most of the time, except for truly exceptional cases, comes from the fact that problems and error handling have been shown not to fit well into a standardized structure. Ruby, for example, came to the same realization even before Go. This is something that was already becoming understood before Go was conceived.

Sometimes it works out. Certainly encoding/json in the Go standard library shows that it's quite acceptable to use where appropriate – the programming tools are there to use, but it turns out that it is rarely appropriate. Which is also why most other languages are also trying their best to move away from the practice as a general rule, if it ever was adopted at all.


Just looking at Go after having experience with Erlang and Crystal: does it still have a narrow-minded view of what features the language won't support?

Like, I remember there was a whole drama about generics, and errors being simple strings that are always returned without being able to raise them. Is this attitude still there, or have things changed?


That attitude is still there because there is no compelling reason to move away from that. A lot of languages have been diseased by bolting on features for no other reason than different languages have them, and it's overcomplicated the languages and fragmented the codebases.

Example, if you ask ten Scala developers to solve a problem, you'll get ten different solutions. That number drops quickly for less feature-rich and more opinionated languages like Go.


I agree with that and Scala example, but also languages diseased by not evolving and not allowing new widely-accepted features. I wonder what would happen with Go if they never changed their mind and kept it without generics?

My point is not about bringing all the features other languages have (it's just not possible unless it's a Lisp), but rather about creating a new language and not understanding how vital something like generics is for general-purpose programming. That is an orange flag for me.


> I wonder what would happen with Go if they never changed their mind and kept it without generics?

Change their mind? What do you mean? Go always maintained it would get them, once the right design was found – which was echoed by Ian Lance Taylor actually working on them even before the first public release. He alone has, what, 8 different failed proposals?

The problem was always, quite explicitly, that nobody within the Go team had the full expertise necessary to create something that wasn't going to be a nightmare later. As you may recall, once they finally got budget approval to hire an outside domain expert, the necessary progress was finally made.

Being an open source project, you'd think the armchair experts on HN would have stepped in and filled in that gap, but I suppose it is always easier to talk big than to act.


> The problem was always, quite explicitly, that nobody within the Go team had the full expertise necessary to create something that wasn't going to be a nightmare later. As you may recall, once they finally got budget approval to hire an outside domain expert, the necessary progress was finally made.

This is a very interesting part, and it looks like you know what happened. I wonder how come the Go team had many super talented experts (Pike and Thompson, for example) with the "power" of Google behind them and generics were an issue at all, while, for example, Crystal, a very similar language with native code generation and a similar concurrency model but no big tech behind it, managed to pull it off?

I've read some comments from devs who used both, and they say Crystal is just plainly better. This just doesn't make sense to me; the biggest tech company on the planet should create a language that is head and shoulders above everything else, but it ended up being just good.


Nope, it's the same. It's one of the things I value about the language and project.


Well, Go has generics now and you can do quite a few tricks with errors. In the end error handling logic is up to developer.

UPD: thankfully Go does not try to bring in every "feature" possible. Not without consideration at least.


No, that attitude is core to the design of Go and one of the key reasons people pick it for projects.

Generics were eventually added. But not until they thought through very carefully performance and compilation time implications.


And this is a perfectly reasonable approach. Frankly, the cost/benefit ratio of delving into another language is most often not appealing.

As a suggestion, you could have delved more into the "hammer" metaphor, mentioning that a hammer is a flexible tool: nail planks, remove nails, smash things and even as a defensive weapon! You shouldn't misuse it for drilling walls, but that is outside the required scope of your work etc.

But in the end it's a very personal post, and we typically crave here for universality and judge posts up high from a meta-perspective, haha. Often neglecting that we were not actually the intended audience.


> It strikes me that Go is riding the peak of hype languages.

Very much so. Also it seems to be a strange choice as a solo developer when its strengths are explicitly targeted at large organizations. And I think the tooling is actually a bit of a mess compared to some other options. God help you if as a solo developer you start building on top of protobufs (another basically default choice in the Golang world...).

I just don't understand why you wouldn't choose something faster and more expressive.


Protobuf is not the default in Go; the default is REST. What tools are a mess? Go tooling (runtime) and IDE integration is very good.

Go is not a hyped language, we're past that cycle, some critical and widely used software are built in Go, millions of people rely on it.


Go tooling was relatively very good a decade ago, but other languages have improved a lot since then, significantly because of its influence, and it's now about average. It's still better than Python, and C or C++ of course. But it's about even with TypeScript and Ruby; shit, even PHP and OCaml have nearly caught up with their tooling. Go's is significantly worse than Rust's (god help us), and Elixir has always been excellent here.

I wouldn't call its tooling "a mess" but these days it's nothing notable either.


Go was explicitly designed for fast compile times, especially compared to C++, and I haven't heard anything to suggest that's no longer the case. It was also designed as a more modern and mature replacement for C, and I can't think of many application domains where C is still the better choice.

Python is at such a different point in the design space I don't think you can really compare the two. Same with PHP, Ruby, Javascript etc.

For OCaml, need to decide how strongly you want to commit yourself to functional programming. Also, didn't OCaml have weak support for concurrency? Has that changed recently?

Similarly with Rust, comes down how much you want to commit yourself to learning and conforming to the borrow checker.

Elixir is an interesting comparison, as it's another language that allows you to build highly concurrent back end services. The tradeoffs are that Elixir provides even more robust scaling due to the Erlang VM. But Go allows you to more easily understand and optimize the memory and CPU performance of your application.


> Also, didn't OCaml have weak support for concurrency? Has that changed recently?

OCaml has had Lwt for concurrent IO for long enough that it is now being deprecated in favor of Eio[1]:

https://github.com/ocsigen/lwt

[1] https://github.com/ocaml-multicore/eio


Well regardless of the goals or intent of its creators, it has found success as a general purpose language suited to a broad set of unrelated tasks. And in fact that is exactly how it is being recommended here by the articles author: so within this context I don't think it makes sense to exclude these other languages.

But even aside from that we were just talking about tooling at the moment.


> Go was explicitly designed for fast compile times, especially compared to C++, and I haven't heard anything to suggest that's no longer the case

Well, it’s quite easy to be fast if you are just spewing out barely optimized machine code. Compilers aren’t slow just for the sake of it.


My understanding is there are more fundamental reasons for why Go compiles faster, like how it handles "includes". And not supporting the C++ templating system.

And some things that slow down compiles are not to improve optimization. But to support complex, higher level language features. Go seems to hit a sweet spot of being highly expressive without slow compile times.


Highly expressive????? Come on… what kool aid are you on??


I consider that a win too tbh; JS land has improved a lot with e.g. Prettier and now Biome, and those were inspired by Go's formatter and stance saying "shut the fuck up about code formatting already, this is how it's formatted, end of story, go worry about more important things".


Go tooling is far better than Elixir and I would say above Rust.

You should have a look what the go command can do.


OK fair, I am a couple years out of date on go (and also rust) myself, so I'm comparing my recollection of both to elixir's current state.


Bazel.

While not Go-specific, it's the extremely popular option in the space and I've seen it bring many-a-seasoned wizard to their knees in tears.

Also managing your go dependencies if you cargo-cult other Google behaviors like monorepos tends to be painful.

Basically just cargo-culting Google behavior == pain. Choosing golang can often be part of this behavior pattern.


Bazel is not a Go default either. You can get very far just by using the default Go Modules, and this is much more common than repos using Bazel.

Throw some Makefiles and shell scripts in the mix, if that's your thing, and that's perfectly fine.


Obviously but if you look at a lot of Go projects you tend to see it.

Especially in a corporate space. It should be obvious that I'm referring to things that you see in a professional environment with professional standards. I'm getting a lot of hate where that seems to be lost on people.

And even if someone is a solo developer I don't assume that they don't ever have to work with other people's code -- that certainly wasn't my experience when solo contracting.


Because you are attributing things that are common in enterprise environments (Bazel, monorepos, and Protobufs) to Go. It's fine if you hate these things, but saying that these are the defaults of Go isn't right, and I bet that's why people are hating on you. There's nothing default about them in Go. If you work with enterprise-y code, it shouldn't be surprising that a lot of them are going to follow Google's steps.


Go was designed for use in enterprise environments. It was literally designed for Google's specific development problems.

Nothing about Go's design should be talked about divorced from the context of it being tailor-built to solve Google-specific problems.

All other uses of Golang are basically in the territory of rounding error.


The opposite seems true (regarding Go being over-fitted to Google problems).

It was designed for Google, but Google has not adopted it widely. Google is still heavily C++ and Java after all these years. The outside world loves it way more. Kubernetes isn't used internally at Google (except Cloud, which is not any different from any other cloud provider) though it's sponsored by Google. Bazel is probably in a similar boat.

Being designed to solve Google problems doesn't mean that it actually solves Google problems well, or that Google thinks it does.

I became introduced to Go through my startup and my experience was it a delight to work with in very small teams.


True for the early days of Go but I don't know about now. I think the switch started happening right after modules came out. Generics and the new range stuff seem more for the sake of growing the language than specific needs in Google.


I wish Bazel was a Go project. Unfortunately, it's written in Java.


Author here. Well, it has worked pretty well for me so far, so I don't know why I would switch. Go is fast and expressive enough for my use, and it's probably outweighed by being super familiar anyway.

(BTW, in my eight years of Go, I think I've used protobufs maybe once or twice. HTTP FTW!)


Do you have better alternatives to Protobuf?

What are the shortcomings that you find with it? At work these choices have been out of my hands, but I've used JSON, Avro, and Protobuf, and I felt like Protobuf was the least error-prone, had the best surrounding tooling, and offered easy migrations while maintaining small payloads.


> I just don't understand why you wouldn't choose something faster and more expressive.

The answer to this is as simple as genuinely asking, answering, and understanding why you don't do all of your development in assembly language.


I didn't mean faster in terms of execution time but in terms of development time. And more expressive is doing more with less code.

We're at complete opposite ends of the spectrum here -- how that indicated to you that I meant assembly...


I'm genuinely curious which language you see as having a better development time. I don't mean that as arguing, I'm actually curious. I don't know much about Go but I just began learning it 2 days ago. I'm already 50% of the way done through a really nice TUI app, and I haven't even touched the docs.

To me at least, it feels extremely productive so far.


Developing with Python is faster.


When leaning on libraries pushing what's new in computer science, like certain facets of machine learning as a prominent example, which generally aren't found outside of the Python ecosystem, certainly. But head-to-head on well-trodden computer science paths, Python doesn't stand a chance.


Only for small or new projects...


It's the same principle.

My own experience (I'm not the author) is that the investment required to reach the point where Go can be a "hammer" as in this case is lower (usually significantly lower) than with "faster and more expressive" languages.


Go might not be the most concise or expressive language, but it's quite fast in terms of development time, IMO/IME.


Yeah, you just write code without worrying about whether it can be done more concise or clever, it's a very pragmatic language.

The downside is code volume, but honestly that is rarely the problem in software.


Faster on what axis?

Go has maybe the fastest compile times, for example. Which is very important when you are iterating on a project.

What do you dislike about the tool chain? Seems better than most languages to me.


Java’s toolchain may leave something to be desired in ergonomics, but compile time is just as fast if not faster.


What do you see missing in the Java toolchain? Frankly, I see it as one of the most complete and deep of all languages.


Only ergonomics; I do actually like most of it (the Maven Central/repository model and Gradle). It is one of two truly general build tools (the other being Bazel), and being that generic/capable does mean that it can’t be as user-friendly as a single-language build tool like Go’s or Rust’s. But those all break down the moment you introduce another language/build step/whatever, while Gradle can build the whole hodgepodge that is the Android platform just fine, which is quite a testament to its power.


I think he could have chosen those alternatives just as well, he just happened to pick Go. His argument is that he strives to use his chosen tool in as many situations as possible instead of having a bunch of different ones.


Yup, this exactly!


I am someone with significant experience in Java, C#, C++, and JS/TS. I am a bit of a newbie to Go but have used it for a few things at work. So, I can give a bit of a comparison for you.

I really like the DevX and strong standard library of Go. Also I really like that the community around Go has a strong preference for just using the stdlib and not going crazy with third party libraries.

Languages are just tools but I have a strong preference for Go for backend services and CLI applications now. Just my 2cents.


Hey, thanks for the reply. That sounds really nice!

I wonder, which aspects of the "Go" std lib are considered strong? Would you say it is more like .net where the std-lib also comes with framework-grade capabilities such as REST-endpoints and setting up applications?

Because in the classical sense of a library, I think Java is fantastic (collections, strings, NIO, HTTP 2.0, etc.). But it misses framework-level out-of-the-box solutions.


So take this with a big mountain of salt as I am newer to Go and my experience with .Net is a bit dated but I do have 7 YoE with C#.

The C# stdlib is larger and more batteries included for sure. But, it feels less focused than Go to me. Over its history it has supported a ton of different styles of programming. Whereas Go’s maintainers have been, at least so far, more obsessive on keeping the language small.

As a result you get libraries and codebases that have a mix of legacy and modern flavors of .NET. That isn’t the fault of the language but more of its large history and wider scope. GoLang doesn’t have a rich UI library built in for example whereas .NET has 3 if I am remembering correctly.

I like the focus of Go but C# is a great language and .NET is an excellent framework.

Here are some things that I think Go does better though:

- TLS and x509 stdlib is fantastic

- Explicit error returns instead of Exception throwing

- A developer culture around idiomatic Go. It’s subjective but it sets a tone for a single coding style.

- Less “magic”. This is more of a sin of Java frameworks, but .NET has lots of higher-level abstractions that push lots of interactions behind the curtain.

- tiny docker images and super fast startup.

Here are some things I miss from C#:

- really nice Enums

- Better support for functional programming.

- LINQ to objects

- Annotations, specifically validation logic as annotations.

- JSON handling is more straightforward

.NET is super capable though. Nothing wrong with using it and it is much more broadly applicable than Go in my opinion.

But, I like the ergonomics of Go and for backend services and CLI tools I think it is a strong choice. But, I wouldn’t choose it to build a game or a desktop/mobile GUI application currently.


No stdlib comes close to .NET's, IMO.


How can you call Go hype driven?

It was literally designed to be "boring".


I think it was the Go Marketing Team who decided to call it a boring language. "Boring" tested well for hyping the language.


Go is getting hype because TS is old news. And people got tired of being nerd sniped into trying Rust and being let down.

It’s okay, hype cycles are good since it shakes up mainstream opinion and gets people to try stuff. Like a new trendy food. I don’t feel much in the ways about it.

How do you feel about Go outside of the hype cycle?


How is Go not older news than TS?


I think hype is more about trends than it is about age. PHP is going through a bit of a hype phase right now, for instance, mostly because of Laravel and the fact that the language has been silently improving a lot since the wordpress days, and developers are just now starting to realize it.


I believe there are many different hype cycles at once, depending on ones vantage point. The world is large!


> Go is getting hype because TS is old news. And people got tired of being nerd sniped into trying Rust and being let down.

Sounds like my impression as well.

> How do you feel about Go outside of the hype cycle?

I think it's in a weird place, but I have a much higher opinion of the language ever since it got generics.

There is nothing which is inherently unique to Go, but it got some neat technical features: Static compilation, garbage collection, good model for multithreading, fast compilation times. With these qualities you can get pretty far and have good cloud readiness.

However, I'm a Java guy and the JVM is my workhorse. I have seen how large projects (don't) scale once you have 8+ developers, and there are shortcomings in Go for projects of this size. And I think the explicit nature of nominal typing (e.g. Java) absolutely wins over implicit structural typing (e.g. Go).

You see this difference of philosophies in interface constraints: Java interfaces must be implemented explicitly, and you can even limit who can implement them. Java is primarily a language designed for libraries & frameworks, and the quality shows.

With structural and open implementation of interfaces, your line of code ownership becomes blurry. This is not nearly as problematic as dynamic typing, but in my mind an interface is a specification, and there should be a contract between the provider and consumer. This also helps with understanding the connections inside and between code bases.

The same can be said about first-class functions vs. functional interfaces. Functional interfaces can be documented, domain-specific, extended and implemented. It feels like a perfect blend of FP and OOP, whereas with first-class functions I feel like I am working with C-style primitives that don't help me express the nature of the solution I want to model nearly as much.

As for Java, I have an immeasurable amount of respect for the current language designers (can't say I have the same respect for Go Designers after the drama surrounding generics). I am an avid reader of their mailing list and time and time again, they have shown deep thought and tremendous foresight (their way of work is quite inspiring). Yes, they have to live with some nasty historical baggage, but they have a very high level of quality.

Then there are aspects such as the ever-evolving JVM, along with its choice of several garbage collectors and top-tier runtime analysis (Flight Recorder) & optimization (Hot Spot).

Sorry, this is entirely subjective but you asked me how I "feel" about it :-)

Summing it up, I'd say Go is a comfortable local maximum which can give you a whole lot, but ultimately is less flexible. But it's surely enough for 90% of applications. In this sense Go is like the big brother of PHP, which is a perfectly reasonable language if all you need in your backend are save/load database operations. Go can cope with that too, but also scale into non-trivial domains, the cloud and multi-threaded problems.

But Java is "corporate" and not "trendy", so most people don't even consider it as an option. And there are enough bad devs who started their career in JS/TS and want to move towards something more performant that is still as ad hoc, and who don't mind sharp edges because they never learned to reason about their application in a more formal sense.

But looking at the work that is currently being done to the JVM, I'm looking forward to the near future when we can run Java-compatible bytecode on GPUs and AVX512.


I'll give it a shot.

The tools make a lot of decisions for you that are pretty arbitrary, like how to format code, freeing up brain cycles for other things.

It compiles very fast, so the iteration cycle is similar to a dynamic language with a REPL. Very little time between making a change, running and seeing the new result. The language is designed from the bottom up for fast compile times even in large projects, with features at odds with fast compilation rejected.

Decent sized community and library ecosystem.

The networking APIs are very simple and productive. More and more back end code is being written in Go for this reason.

The semantics are straightforward. The other programming environment I need to deal with at work is Spring Java, which is unjustifiably complex and obfuscated for what it does. Go allows you to read the code and run the program in your head, understanding semantics and performance characteristics.

Builds single file executables that are simple to deploy.

Has just enough high level features to be productive without compromising above points about simplicity, fast compiles, and comprehensibility. Channels and go routines work very well to model concurrency across a wide array of problem domains.

Maintainers are careful about adding new features. Generics took a long time to arrive, but when it did was very well thought out and did not negatively impact performance very much.

I'm sure I'll think of more.


Thanks for posting this.

It’s an unfortunate choice of “one hammer to rule them all” given Go does not offer necessary coverage at both low and high ends of abstraction (and performance) the way C# does. For me, it’s a similar kind of hammer.

(I prefer to think of it as a crowbar ;D)


The problem with C# is that it's created by Microsoft; even if it's a good language, it's hard to look at it and not think about how much it's limited when used outside of Windows.


It seems that you might be asking this question in bad faith.

But on the off chance you are not, please find previous replies that address the same kind of question below:

https://news.ycombinator.com/item?id=41037792

https://news.ycombinator.com/item?id=41189309

https://news.ycombinator.com/item?id=41211767

https://news.ycombinator.com/item?id=41197493

https://news.ycombinator.com/item?id=41218353


I understand it's probably not the case today; it's just that C# shares some of the MS reputation, and that's hard to get rid of. Especially when there are dozens of open-source independent languages around.


I think phrasing it with insistence that it is "probably not the case today", despite demonstrable evidence of certainty, means that the motivation for this reply was not to learn something new or share something you know, but to try to get a rise with low-effort bait. This is both tiresome and goes against guidelines.

Somehow we all can normally discuss programming languages without bringing practices of Google or Oracle into the picture and focus instead on programming language development initiatives that they drive and sponsor, that are themselves sufficiently independent. Let's keep it that way.

Also https://news.ycombinator.com/item?id=41218353 (which was the last link in the list).


I phrase it this way only because I do not have enough knowledge to be certain. I've read the comments that you sent, but being certain would require a much more in-depth investigation, which I don't want to do simply because I have no plans on interacting with C# in any foreseeable future.

I only tried to explain how some people who are not directly involved with C# might see the language.


That's a shame. .NET does away with all the pain points you've grown to expect from Python or Java tooling, has an arguably better support[0] story on the main platforms nowadays, and extremely no-nonsense CLI tooling similar to Rust or Go:

    sudo dnf/apt install dotnet-sdk-8.0
    # or
    brew install dotnet-sdk

    dotnet new web
    dotnet run
    curl localhost:5050
[0]: Situations like https://blogs.oracle.com/java/post/java-on-macos-14-4 do not happen to .NET as it tries to use the platforms it is targeted at in a "canonical way". To be fair, not exactly a Java fault, but effort is invested to ensure that the runtime plays nicely with e.g. memory protection techniques.


This is no longer remotely true. Most .NET teams I know now deploy on Linux and develop on Windows/Mac/Linux.


I think it’s fine. Honestly, I think there are a lot more deep developers out there than broad developers. It’s just that they don’t blog about depth the way broad developers blog about their shiny new toys.


Honestly it gave me the feeling of not arguing to use Go, but instead arguing to use the full-featured language you know best. The caveat being that this applies if you are a single developer making all the decisions and writing all the code. It seemed to make a good case for that.


Different strokes for different folks. Used Go for ~5 years, would be happy to never use it again. Typescript is in a similar bucket, lots of professional usage (sadly still using it) would also be happy to never touch it again.

Kotlin/JVM became my hammer and I currently don't feel like there are any gripes I have about it except maybe that the Gradle/Maven dichotomy and associated anxiety that build systems give people makes it harder to sell people on it.

Otherwise language feature wise and runtime wise it's about as good as you are going to ever need for 99.99% of (non-frontend) use cases. You have C++/Rust/Zig to fill in the few places where a runtime isn't viable.


I'm happy you found your hammer as well! :D


Where I feel go is lacking is for data wrangling.

Group by, filter, map, join. It is just very error-prone, inconvenient and slow to implement with for loops.


Go does support functional programming constructs (as it has first-class functions), and there are some FP libraries out there, but they are discouraged because the execution is so much slower; Go is not optimized for FP, and chooses "clumsy" for loops over clever functional programming because the loops have mechanical sympathy and are simply faster to execute.

That said, if you have a use case with a lot of data wrangling like that, Go may not be the best choice and a functional programming language may be a better fit.


Perhaps I’m thick but what kind of programming doesn’t involve data wrangling?

What are Go programmers doing that they don’t feel the need for map/filter/etc?

BTW there’s no reason why map and filter would be slower than loops, efficiently lowering such functions to loops was solved a very long time ago.


> What are Go programmers doing that they don’t feel the need for map/filter/etc?

As a refugee from a scala project that went badly (we eventually ported the entire thing to go), it's not so bad when you're just using map and filter and friends.

But eventually there's so many of those little methods, each with their own nuances, and I don't want to have to remember them all (`sliding` comes to mind); it's just exhausting. I don't want to deal with it any more. The for loop is freeing. I've written the map/filter/reduce/groupBy functions a couple of times, but I never end up using them. I don't miss them anymore.

I guess, those methods were sold to me originally as less powerful than a for loop. You had guarantees about what they're doing, and eventually there were enough of them that something flipped. The for loop feels easier. I can see everything that it's doing. It's all right there.

Same things with monads after a certain point. Result/Option are sorta fine, but I'd rather just deal with remembering to close a file than use a Resource. I don't want to have to think about Semigroups and Applicative Functors. I just want to call the function and do the thing. Eventually FP felt like my experiences with bad OO projects where I spent 80% of my time trying to figure out the platonic ideal of something and where it fit to make everything elegant. And then tracing my way through things was significantly worse when things went wrong (and they did still go wrong). I decided it wasn't worth it.

And, yeah. Sometimes I find a gronky loop somewhere that's doing too much. I just re-write it while I'm passing through so it doesn't get out of control, and I move on.


Thank you for sharing your experience and I am sure it is right for you. And I agree that Scala can be too much.

I just wanted to point out that there is a fundamental reason that for loops are inferior to map/reduce when working with data.

For loops is "how" where map/reduce is "what" and that puts the burden on you when you need to parallelize your job.

Joel Spolsky described it here (long ago)

https://www.joelonsoftware.com/2006/08/01/can-your-programmi...


> I just wanted to point out that there is a fundamental reason that for loops are inferior to map/reduce when working with data.

> For loops is "how" where map/reduce is "what" and that puts the burden on you when you need to parallelize your job.

That's not inferior. That's one advantage that map/reduce has in the set of trade-offs between them. I'm aware of that argument. It's part of what I was trying to get at with the "less powerful" comment (the other side of the coin of being less powerful/more constrained is that it's easier to parallelize).

But ~95% of code never makes it to that step. And I've found that defaulting to it leads to worse code bases, instead of defaulting to the simple intuitive thing and accepting the burden for the other 5% of the code.

Because the burden of that 5% of the code is never the biggest part of scaling. It's usually something like getting the business to understand the implications of CAP theorem and figuring out what trade-offs are best for them. The code is pretty easy, by comparison. Even if I have to chuck out 5% of the files and re-write them over a period of time.

I guess my recommendation is basically to do things that don't scale in your code, because 95% of code won't need to, and the other 5% has a good chance of being re-written anyway during the process of scaling.


Those methods are not magic pixie dust. Chain a few of those and you've looped through the entire array three times, instead of once with a single for loop.


> Go is not optimized for FP, and chooses "clumsy" for loops over clever functional programming because the loops have mechanical sympathy and are simply faster in execution speed.

So are iterators in Rust, which allow you to write idiomatic iterator expressions. Hell, even LINQ in C# has improved dramatically and now makes sense in general-purpose code where you would have erred on the side of caution previously. You can pry ‘var arr = nums.Select(int.Parse).ToArray();’ from my cold dead hands.

At the end of the day, it is about having capable compiler that can deal with the kind of complexity compiling iterator expressions optimally brings.


Mechanical sympathy now favors SIMD instructions and hyperthreads because sequential loops are slower even unrolled.


With generics you can now write all of those in Go.


And it would be an order of magnitude slower than a plain loop, because the compiler is not smart enough to optimize it.


None of these reasons are specific to Go. They apply just as well to any Turing complete programming language.

Not to say the arguments are bad. Just that the argument is for picking one tool and using it for everything to benefit from your investment in that tool across all your projects.


Yep, totally agree! But I learned Go, so I wrote about that. :D I tried Brainfuck once, but I don't think my brain can wrap that enough to make it a career.


I wish "any Turing complete language" wasn't used as a catchphrase.

Especially when such a claim is so flimsy. In what way does the Brainfuck programming language environment give the coder the same experience as Go, huh? Because, if it doesn't, I just invalidated your claim.


I really want to love Go for side projects but it’s just so verbose and time consuming.

I understand that a framework with ”batteries” goes against Go principles, but coming from Rails, which solves the common boring problems for me, Go just makes it too much of a pain.

At least as far as web dev goes, I’m sure Go is great for other stuff.


Back in 2012 or thereabouts, I was trying Akka (a JVM library) for concurrency and such. Around the same time I gave Go a try, and it was much less verbose and simpler. I never looked at Java after that, and I never felt Go is verbose.


Related:

Java for Everything

https://www.teamten.com/lawrence/writings/java-for-everythin...

Previously HN discussion of "Java for Everything":

https://news.ycombinator.com/item?id=26934297


I find Go akin to C ; it's really fast to pick up, I can use it without internet if I have to, it has enough functionality and enough stable libraries for me not really to have to bother with the latest and the greatest of everything like in the javascript world.

I use it when I cannot use CL (for basically everything) or Racket (language / code generation), which basically means 'if my clients doesn't accept the above'.

For web/desktop/backend CL and Go are both incredibly productive. CL for me is more productive, mostly because the effortless starting, far more expressive (do a lot with very little code), better repl, debugging, save and die etc. Single binaries are great about both and so is lightning fast compilation.

I guess I have two hammers; one of them has a more comfortable handle for whacking in those slightly more difficult nails.

Lately I cheat by using a subset of CL and generating the Go code.


> So, what, I’m going to limit my career options?

I don't quite get this sentiment. In my experience, the career opportunities come from solving worthy problems, as opposed to using a particular language. Plus, I don't believe that an engineer should be identified by a language, as in a Go programmer, or a Java programmer.


But often, a client needs a developer for an existing stack, and advertising as a Go programmer (for example) gives you an edge over someone who doesn't, in my experience.


You could have written the same article about C#.


Yes, I definitely could have. But I don't know C#, so that would have been an awful blog post. :D


Indeed, this article would read just as well after applying s/\bgo\b/C#/ig, or should I say:

    Regex.Replace(articleText, @"\bgo\b", "C#", RegexOptions.IgnoreCase);
Or indeed most general purpose languages, which the article itself mentions.


The C#'s regex engine would have run circles around Go's here though while doing so :)

https://github.com/BurntSushi/rebar?tab=readme-ov-file#summa...


For me, this is very true. Most bioinformatics is done in python, but over the last ~4 years I've ported everything I do over to Go: https://github.com/koeng101/dnadesign

I'm just massively more productive, and the fact that I can read code I wrote years ago and fully understand what I was thinking at the time is amazing, and I haven't experienced that in other languages. I've learned other languages quite in depth, but with Go it is simple enough that when I write code, I'm not thinking about code, it is purely the problem being solved, and the code just comes out onto the keyboard.

Ironically enough, I've recently started porting my entirely-go bioinformatics package to be a python package, mainly because I realize I'm not gonna convince everyone else in my field that Go > python


For me, expertise in a particular language is rarely very useful. Expertise/specialization is mostly a time tradeoff for particularly thorny issues, but IME they rarely come up in a programming language context (usually it's domain or particular codebase-specific). That being said, Go is a really good "workhorse" language.


Been using go for nearly a year. Don’t like the strange formatting and naming standards and arbitrary rules surrounding it.

The worst thing I hate is the packaging rules. Once something is out in a folder it becomes a package. Once something is in a package there can’t be circular dependencies.

There can be circular dependencies within packages and files, but not between packages. This is fine, except that in Golang (and in life in general) people like to organize things by creating new packages. Why? Because people like folders. It’s instinctive. Rather than programming a project in one flat folder, people like to throw their code into little modules by making a bunch of folders organized by semantic meaning rather than dependencies.

This OCD drive, which I believe is some intrinsic part of human nature, to organize things by semantic meaning in folders results in almost every Golang project becoming some massive organization problem over the right place to put a function or a method. You spend an inordinate amount of time organizing things, thinking that if Go tells you there’s a circular dependency you did something wrong.

Nothing is further from the truth, but people love Go so much that they trust this. What is happening is that the Go way of making packages and your semantic methodology of organizing things into folders are colliding. Go doesn’t say there’s anything wrong with circular dependencies. The language completely permits circular dependencies within a package, so if you want to create something with circular dependencies (which is an extremely common thing, in languages and in complex systems outside of languages), Go effectively says you can’t have folders.

It’s the most strangely arbitrary choice, and it turns every project into an exercise in organization. Golang pits the human desire to organize things semantically against an opposing rule of organizing things into a tree of dependencies. A lot of people spend so much extra time resolving this conflict because it makes them think they’re making things “more” organized and cleaner, when it’s, in reality, a pointless effort to resolve two arbitrary and conflicting rules.

I know people love Go. This is just my personal opinion of it. I don’t really like it. I want program organization to be seamless and simple, and Go is not that for me.

Rust handles organization better. Just better ux by allowing people to use folders as they intended to use them. Rust also has its own set of usability problems. But those problems are explicitly implemented for a specific tradeoff. In go the folder thing is completely arbitrary.


The article claims Go supports building GUI apps and links to this.

>Wails is a project that enables you to write desktop apps using Go and web technologies

At that point why not just use Electron and Node JS.

I like Golang, I truly do. But after building mobile apps in both Golang and Flutter, I'm well aware of Go's limitations.

Making anything look remotely nice is painful. Things get really difficult when you use the wrong tool for the job.

A much better argument could be made for JavaScript being a language that can do anything. Even then, whenever you run npm install you need to pray the house of cards that is modern JavaScript doesn't collapse.

C# is also a contender, but people, particularly the FOSS crowd doesn't like it because Microsoft == Bad.


Sorry, you have been blocked You are unable to access maragu.dev Why have I been blocked? This website is using a security service to protect itself from online attacks. The action you just performed triggered the security solution. There are several actions that could trigger this block including submitting a certain word or phrase, a SQL command or malformed data.

What can I do to resolve this? You can email the site owner to let them know you were blocked. Please include what you were doing when this page came up and the Cloudflare Ray ID found at the bottom of this page.


I feel mostly the same as the author. The Go ecosystem is so simple, logical and smooth that it is hard to reach out for something else. I do use other languages for one-off programs of course, be it bash, perl or javascript, depending on the task.

On bigger projects, the first pain point to appear for me is dependency management. It feels so antiquated in most other ecosystems, with loose compatibility contracts that add mental overhead. Go lets you focus on the problem you are trying to solve, and you get so used to that luxury that using anything else quickly becomes painful.


I really hope that Julia catches on. It feels like it solves a lot of pain points Python programmers have, while still being Data Science focused. It just needs to continue to grow its ecosystem


as the saying goes, when you're a nail the right tool is always a hammer


I guess we're at "I am Go and also a hammer and all the nails" then. :D


I wish Go had a decent UI framework though.


Can you give me an example of a decent UI framework? Or frameworks?

(I'm not a GUI programmer, but I've been dabbling a bit using Fyne. I figured that in the C++/C# sphere UI frameworks especially on Windows were sorted out until I read an article by some frustrated individual who listed all the problems he has with various UI frameworks on Windows. So I realized that perhaps this isn't as well sorted as I thought. I don't actually know)


> Can you give me an example of a decent UI framework? Or frameworks?

The obvious answer is Qt, but Qt can be a bit of a chore to get into. Same is true for GTK. In my opinion more accessible options would be Fyne or Gioui. All of them have good Go bindings and are suitable for "standard fare".


Anything that has a visual designer where I can combine drawing and drag&dropping components and code depending on my needs.


Cogent Core looks promising.


I wasn't aware of that project. This looks pretty new, but after clicking around a bit in the documentation looks very interesting. I get the impression that it isn't as verbose as a lot of other frameworks I've seen.


Windows situation is kind-of a mess, but cross-platform story is in a good state mainly thanks to AvaloniaUI and Uno (and you can always use things like SDL2 or MonoGame if you feel like approaching the problem in a more “hands-on” way).


I mostly work on Swift and macOS, so SwiftUI.


TBH this can be said about most programming languages, and this is one reason why Node/Electron-based UIs are so popular: there are so many well-developed UI options there that are automatically cross-platform.


Very few languages do. Rust doesn't have one either.

UI work requires very specific things to be integral to the language to be fluent to use.


Go has FyneUI, and it's great


> But why? When the common wisdom is to always take the problem at hand, analyze it, and then choose the tools, why would I ignore that and go

Sounds like they are choosing the right tool for the job.

When considering languages, familiarity is a significant aspect. And, the smaller the duration/size of the project, the more significant it is. A decent analysis wouldn't ignore this.

If the language you're most productive with is appropriate for a project (and go is appropriate for a wide variety of things), you need a good reason not to use it.


I recently tried to set up a PHP/Laravel dev environment on a Suse Linux box. I installed PHP and all the stuff accordingly, then followed one of the getting-started docs from Laravel. You know what happened? It didn't work; it was broken. I assume that's why Laravel created Sail, which I didn't use. It's a much different case with Go: installing the toolkit and development environment is easy. Even with free VS Code, everything just works. PHP+Laravel seems to encourage you to buy all the dev things.


At the same time using purely php is about as convenient as it gets. Just rsync, ftp or scp your index.php to the server and deployment is done.

PHP+Laravel is more akin to ruby on rails or python django. Pure PHP is similar to Go in terms of the deployment scenario


The GUI apps built with fyne framework are, at best, toy projects. I am not convinced go is a robust solution for building native GUI interfaces.

> Reason 1: Go can do basically anything

That is a weak argument. All languages can do everything (for example, you can build GUI desktop apps in PHP). If omnipotence is the main criterion, then C# or Java are better alternatives to Go - you can even build an OS on the CLR/JVM.


I would argue that a language's ecosystem matters much more than the language itself. This is why Java is still so popular, and why certain languages are better suited to certain tasks than others. For example, if you're doing numeric or scientific related development, it's hard to beat Python even if Go itself is better, because of the great set of robust libraries you get for free. (Yes, Go has some equivalent, but not as tried and true as numpy, scipy etc.)


Go is not a single tool. It's a vast set of tools. Using mostly Go is kind of like using mostly Milwaukee power tools. Then, when you need a tape measure or a level or a tool bag, if you're reasonable you can use another brand, where it makes sense.

You still should always try to use the right tool for the job. Doesn't mean it has to be the absolute best tool or such a thing exists, the best tools are often the ones you have ready at hand and know or can learn how to use.


While this is true, especially in larger corporate settings there needs to be some pushback if someone tries to introduce a new tool, because a new tool or language means you need to include it in hiring and training.


This is the same thing everyone says about PHP, negatively.

Don't make Go the next PHP. PHP has some good updates recently but it still has some people with a negative experience with it.


PHP fell foul of adopting other languages' features to try and make itself more popular though (OOP, typing, err... I don't know PHP anymore), Go is actively resistant to it. Or, resistant as in, "sell it to us" where it's a really hard sell to add something to the language that can already be done in a different way. The error handling debate from a few years ago was great, a few really well thought-out proposals were made but in the end, people were like "...this is not an improvement over how we do it right now".


You don't have to use all these new fancy PHP features as long as you don't use any third party libraries. I still write PHP just like I did in 2002 and it works.

It's not as pedantic as Python which makes it pretty useful for small data manipulation tasks which I have to do every now and then.


Same here. For the things I do, Go is great. For front-end, I use Quasar (Vue/JS). That's all I need. If I ever need crazy performance, I'll do it in Zig, once it releases 1.0. Chasing performance merely for the sake of performance is a waste of time. Use what you know first, optimize later. And Go can handle anything except 3D games, but that's more about it being a GC language than anything else.


It's really nice that Go and Rust have one standard package management system, and a built-in standard formatter. Both of these things seem obvious now, but they were innovative when they were introduced. They add a new set of "batteries" to the "batteries-included" mindset. And they put community and ease-of-use first, which are crucial for adoption.


I think opinions on Go depend on the quality of code you're used to working with. I was lucky enough to be on a small team of great engineers for much of my career so far, and my impression of Go is that it prevents you from writing good code. However hearing others' opinions, I suspect it also doesn't let the bad code get too bad, which if you're used to bad code, is desirable.


I like that.

I have a similar attitude with Python and Go as back hammers and Quasar (Typescript + Vue + Vite + components) as the front hammer.

Whenever there is a date in Python I exclusively use Arrow (even for the simplest, most basic ones).

I know it is not effective, but I am an amateur dev, and having these hard rules keeps me from testing something new all the time.

I leave this for home automation and docker services where my motto is "fix it until you break it"


Java was my hammer back in the day. In a few more years, I'm banking on it being my post-retirement "daddy needs a new boat" solution.


s/Go/Haskell/ and I feel this 100%

when you have a programming language that can do everything and vibes with how you think, you're golden


I'm totally with this idea - use the language you know best for mundane tasks.

I used to switch to shell or Python to do one-off scripts. But there's not a super great reason why I can't do the same in C++, which is what I know best. All the build stuff is easy to do in my company's repo, so that's not a big blocker.


I know it's great and powerful, but Go was always the "Google language" to me, and as an insufferable hipster that just turned me off from ever touching it.

I want all the things I spend my years studying to be built by committee or otherwise brief specimens of creative genius. Anything else makes me feel like I'm just learning Marvel lore.


I grew up writing C/C++ and I could write this same blog post but about Python for the same reasons that the author cites :-)

Sometimes I wonder if I am just being lazy and justifying not learning new stuff but then I look at the new stuff that keeps landing in the Python ecosystem and conclude otherwise.


I was hesitant to learn Go after 10+ years of Python, but then I started work on a side project where performance translated into actual money savings for me. My Python code routinely took between 100-200ms to run. I was happy, but I was also curious if I could do better without rewriting it in C/C++. Go proved easy to learn, and being compiled it was much easier to deploy (that was before I could rely on Docker for packaging). The big surprise came when I ran my code and found out that each request would take 10-20ms to process the same payloads. I have never written a line of code in another language on the backend since. The cherry on top is support for multithreading/multi-core CPUs. I've been in Python-by-day (client work), Golang-by-night (my own work) mode for many years now. It's a great language. OK, I do admit to learning Rust, but... it's just out of curiosity, not out of need for anything "better".


I love Golang; I use it for backends and have some GUI apps, but I think GUI packages and frameworks for Go are still immature. None of them convince me.

I would like a tool in Go that could handle a nice GUI, compile to WASM, and run in the browser. Any ideas?

I will follow you, @markusw.


> Go is my hammer, and everything is a nail

> Less context switching

For the very same reason, my hammer is javascript.


I like using this great library to build progressive web apps using Declarative Syntax https://github.com/maxence-charriere/go-app


I want to use Go, but whenever I compile a small CLI tool using it and it comes out as a 10 MB executable, I just feel ashamed. Especially when Zig and many other tools can produce the same thing at a few KB.


In my case after hearing one of the main devs of Go (I forgot his name tbh) in an interview saying they'd NEVER implement generics, it just rubbed me the wrong way.

The other thing is, despite having touched Go since its early days (late 2000s), I don't have enough real-world experience with it. I'm too self-critical to ever apply for jobs I don't feel confident in. So until someone just offers me work with it, I probably won't use it as much as I could.


> it just rubbed me the wrong way.

On the other hand, they did implement generics - indicating a willingness to listen to community feedback and change their position. Isn't that what you want in project leadership?


Go is nice but people either love or hate the error handling.

There have been some good proposals but because people are so passionate about all their requirements and edge cases, that I don't think it'll ever get improved.


I feel the same, but for me Rust is currently my hammer ;)

Checks all of the checkboxes.


The fact that monorepos are so easy helps make Go so versatile. It is so easy for me to make an adhoc CLI for data fixing which uses my core service code, for example.


Same, but s/Go/Scheme/g. For over 20 years now.


"Reason 1: Go can do basically anything"

So how do you build an OS kernel in Go? C folks have been doing it for decades. Rust guys are slowly catching up. And Go...?


I came for the programming language discussion, but I want hear more from the author about their contentedness with being a solo developer for life.


Same here. As a solo entrepreneur who is 50+, I love Go for its simplicity, and e.g. migrated my newsletter generation pipeline from Python to Go.


Does the same apply to 'platforms'? For example, is it better to learn C# + F# + Powershell instead of C# + Scala + Bash?


JavaScript and TypeScript are the hammer. Go as an oil lubricant. C# as a Swiss Army knife. Java as a toolbox.


Wish I could dedicate life to one language, like F#. I think you'll have better luck doing it with Go.


Go is terrible at shell scripts. Too bad it’s not more like python that way.


Check https://github.com/bitfield/script for shell-like scripts in Go.


> all popular programming languages can do basically anything

That's just very not true...


We need an example here; otherwise, any language can generate code in a target language to do anything. And in the end all languages generate machine code one way or another, so that's pretty much the same thing.


Got an example?


SQL might be the most popular programming language. Perhaps you can think of something it cannot do, practically speaking[1]?

[1] Theoretically it might be able to do anything, but the context is clearly talking about what is, not what could be.


It's true in terms of Turing Completeness.


Turing complete just means you can simulate a Turing machine.

A Turing machine is a mathematical model of a computer, it’s not a real computer.

Real computers can do things that Turing machines can’t do, e.g. generating random numbers[0], interfacing with hardware, etc.

[0] https://en.wikipedia.org/wiki/RDRAND


Granting the indie-dev premises (limited resources, focus is good), to me the key is enablement by available libraries, platforms, IDEs, language survival and importance, etc. -- none of which the author mentions.

Java for the most part has libraries and IDE's due to its history, but got tripped up on the essential web platform story: it's only achingly small/fast enough for real servers, and just flat out gave up on the browser after the javascript onslaught. For future-proofing, anti-Oracle bias has hamstrung Java of late (notwithstanding the excellent upgrades and engineering Oracle put in).

Go is great if you're on the server side with Google-like concerns, and it's unlikely Google would ever drop Go.

With Rust, the language offers the most for serious systems programming, but the learning curve limits available libraries (converse being true in javascript land). Rust is still early-adopters - likely the best talent-wise, but not scaling.

Swift is interesting. Can be as easy as Java, but is becoming as correct as Rust wrt lifetimes and more deployable than Go, to both server and embedded. But no real incentive from Apple for deploying on Windows or to the web, so that's handled by a few heroes. And unfortunately, libraries are sort of an open-source zoo of minor offerings. But Apple's betting the company on Swift, so you can, too.

Python pretty much lucked into popularity by supporting the scientific computing that would become data analysis and AI after building significant community inertia. It's sort of the default prototyping language in a time when prototypes are often good enough. It's gradually been adding typing and performance to stay good enough.

So Python would be my recommendation as the one language to rule them all for indie developers, who are more likely to be plumbing together applications than writing database engines. It's also where the money is now for most developers.

That said, it may depend most on the market for skills. It might be easier to build an indie business as a Go developer because the supply/demand curve favors you. And as far as I know, there's no good data on point for that.


If you’re going to insist on using the same tool for every job, at least pick a better tool.


Does Go even have a generic list (ArrayList in Java)?


Go can't do GUI. Go is too fat for embedded systems; TinyGo won't cut it. If I can script it, why should I use Go? Go can't really do web... No one language covers it all.


The fanboyism, in this thread, for such a mediocre language is disappointing to say the least.


I CAN’T BELIEVE NO-ONE HAS MENTIONED MY BEAUTIFUL TYPOGRAPHIC LAYOUT YET. :D

Just look at this beautiful test page [0]. I’m pretty sure I spent more time on that than on the blog post.

On a more serious note, thank you for all the discussion! It’s hard keeping up with all the comments, but I’m truly appreciative of the quality of discourse here.

Also, the newsletter subscription is up now, if you’re into that. [1]

[0]: https://www.maragu.dev/typography [1]: https://www.maragu.dev/blog/go-is-my-hammer-and-everything-i...



