My complaint about Go centers on its mechanics of code reuse: static linking for all Go code; making anything Really Useful requires using other libraries, including those written in C, which in turn require a tool to help you write your wrapper ...
If we could get dynamic linking and something less cumbersome for interfacing with existing libs, I could use Go for Serious Work.
The C and C++ code Google runs on its servers is always statically linked, and I would call that Serious Work.
In the end, static linking is almost invariably superior to dynamic linking: simpler toolchain, simpler runtime, better reliability, better performance, and better security; see:
> (Remember that the real impetus for adding them to Unix was X11 and its big and badly-factored libraries, which most of us aren't blessed with.)
Really? Today we are very much blessed with quite a few large libraries - not just X but Qt on top of that and KDE on top of that. WebKit. ICU. These come out to dozens of megabytes. Static linking would add somewhat less by eliminating unused symbols, but you'd still end up with massive bloat for your GUI processes; maybe with today's disk and memory sizes and SSDs that's not the end of the world, but it would be a rather pointless performance drag.

And that's not counting the difference between Go and Plan 9: I count 4,000-odd binaries in my PATH; if each of them had the 1 MB overhead of hello world in Go, I'd be out 4 GB, more than I pay for the entire set of libraries on my system (with some redundancy). (And I hope you don't mind waiting for your programs to compile; despite the Go compiler's vaunted speed, it actually takes over 4 times as long to build hello world as clang does for the C equivalent, with the vast majority of that time spent in the linker.)
Security? Today's security environment requires frequent patches to library code. If you have a system to automatically re-link all the programs that link against libraries with security updates, great (make sure you don't miss any, such as manually compiled or copied binaries); otherwise, your system is impossible to keep secure.
> Today we are very much blessed with quite a few large libraries
And with orders of magnitude more RAM. And just because there are large libraries doesn't mean they are either necessary or desirable.
> not just X but Qt on top of that and KDE on top of that. WebKit. ICU. These come out to dozens of megabytes.
Interestingly, most apps that depend on Qt, WebKit or ICU include their own copies of these libraries.
> Static linking would add somewhat less by eliminating unused symbols
Thanks for reminding me of another benefit of static linking which I forgot to mention.
> I count 4,000 odd binaries in my PATH
I'm a fan of many small tools working well together but I can't help but feel that when you have systems with 4,000 binaries in your PATH, something has gone terribly wrong.
Also, Go's current binaries are large not because they are statically linked but because there has been basically zero optimization of generated binary size. There is no reason Go binaries couldn't be much smaller; it just hasn't been a problem so far for anyone building systems in Go.
> Today's security environment requires frequent patches to library code.
Which is not helped by dynamic linking, for reasons of complexity (see http://harmful.cat-v.org/software/dynamic-linking/versioned-... ) and because most programs use their own versions of such 'shared' libs anyway (update your system's ffmpeg and Chrome will keep using its own copy). DLL hell has security implications too.
In practice, people end up either doing things that nullify the alleged benefits of dynamic linking, or simply using static linking (Google deploys statically linked binaries, sometimes multiple GB in size, to its servers).
> Interestingly most apps that depend on Qt, WebKit or ICU include their own copies of this libraries.
So you're not a Linux user, OK. In that case there really are not that many shared libraries. On a free desktop, the large majority of the programs you use require the same set of shared libraries. And at least in KDE and GNOME, there is a large number of small programs running in the background.
> I can't help but feel that when you have systems with 4,000 binaries in your PATH, something has gone terribly wrong.
A quick "ls /usr/bin | wc -l" gives me 3135. Considering this is not the whole PATH, I think 4000 is a normal number.
In Linux distributions, programs don't have to ship their own versions of anything. Maybe you should try it.
Using something and then reaching a conclusion that is on the "next level" often produces conclusions surprisingly similar to those of people who haven't started using it at all.
Doesn't mean I want to waste it. Some systems don't really have much RAM to spare, such as the iPhone.
> And just because there are large libraries doesn't mean they are either necessary or desirable.
This is an old argument, but short of rewriting the world there isn't presently much alternative.
> Interestingly most apps that depend on Qt, WebKit or ICU include their own copies of this libraries.
Not sure which system you're talking about. As far as I know, Linux distros tend to link everything against system libraries. Here on OS X, Qt is usually bundled, but WebKit and ICU are part of the system and dynamically linked against. Independently distributed Linux programs are an exception and arguably a bad idea.
> I'm a fan of many small tools working well together but I can't help but feel that when you have systems with 4,000 binaries in your PATH, something has gone terribly wrong.
Most of it I don't use and thanks to MacPorts, I have a lot of duplicate copies of tools. (Maybe not the best system, but disks do have enough room to waste some.)
> There is no reason why Go binaries couldn't be much smaller, other than so far it has not been a problem for anyone building systems in Go.
Fair enough. It is a problem for me mainly because I value very fast compilation of small tools.
This does sound awful, but I have not really seen it anywhere outside of libcs. The OS X libc has only a few such switches (UNIX2003, INODE64) and uses them to provide wide backwards compatibility.
> and because most programs anyway use their own version of such 'shared' libs (update your system's ffmpeg and chrome will keep using its own copy).
And indeed this can cause serious problems (I remember a vulnerability report or two about some program shipping an out-of-date library), but luckily you're exaggerating its prevalence. Chrome will keep doing its own thing, but when WebKit gets another of its innumerable security updates, I don't have to redownload the 32 applications I have that link against it.
(ffmpeg might be an exception because it doesn't care about stable releases, but I also remember a blog post complaining about Chrome's (former?) gratuitous forking and bundling of libraries. It's a bad idea.)
> Dll-hell has security implications too.
Only on poorly organized systems. On a Linux distro, the package manager takes care of dependencies and generally gets it right. OS X is uniform (and willing to break backwards compatibility) enough that when there are problems, the developers update their apps.
> In practice people end up doing things that either nullify the alleged benefits of dynamic linking,
in a small minority of cases, yes; in the vast majority of cases where, on a well-organized system, I just want this security or framework update to make it to everything, no.
> or simply using static linking (Google deploys statically linked binaries, sometimes multiple Gb in size to their servers).
Facebook also does the gigabyte-binary thing. It's a ridiculous waste of space, but if they don't care, they don't care; servers don't have as many constraints as user-facing computers (they have a fixed workload and expected disk usage), and are often less vulnerable to library security issues, not having to expose the full web stack + PDF rendering + Flash + GL to any random web site the user navigates to. :)
I don't see why this got downvoted. It's the biggest issue I have with Go as well.
It's a royal pain writing Go that talks to C code, compared to, say, Lua or Python, and there just _isn't_ a way to make other languages pick up Go libraries and run the symbols from them, afaik...
In my limited experience (small tools such as a process monitor linking to libproc), it is astonishingly easy and convenient to call C code from Go: there is no need to write bindings, as Cgo lets you access C symbols directly from Go. I would be interested to hear what difficulties come up in larger projects.
For illustration, here's how I call the openproc() function from libproc:
This illustrates one issue I found with Cgo: calling functions with a variable number of arguments is not supported. I had to define a new C function, "my_openproc", with a fixed number of arguments, which you can see in the preamble comment of the import statement. That comment also includes the compiler and Cgo directives that make a Makefile unnecessary. The code for the whole tool is contained in one Go file.
> It's a royal pain writing go that talks to C code, compared to say, lua or python
This is plain wrong: you can pretty much call C code directly, while in Python and the like you really have to write a wrapper.
Of course, in Go you will write a wrapper anyway to give the library a more Go-like API, but I don't see how this could be any worse than in any other language.
> there just _isnt_ a way to make other languages pickup go libraries and run the symbols from them afaik...
To get non-Go code to call Go code is trickier (also Go really wants to run your main()), but can be done, and there are even libraries to write Python extensions in Go, see:
Let me help you out so you don't feel belittled or insulted. Serious Work is anything that will ever leave the confines of my experimental system to be used, viewed, or edited by anyone else. It doesn't matter if it's a trivial script-like tool or a complex app.
My understanding is that the static linking is a temporary measure to keep things simple and let them focus on getting the language stable. Dynamic linking will come later.
Somebody should write a Google Go-to-C transpiler. Most Google Go features are restrictions on, or stylistic variations of, C intended to 'improve' it, so a transpiler should produce fairly readable C code. Then you could use any compiler or linking type, combine it with any language, etc. You could probably even use LD_PRELOAD to replace coroutines with pthreads.
For instance, Russ Cox implemented coroutines and channels in C in his libtask, in a few thousand lines of code. It's not pretty, but it works. Put something like it behind an API as 'libgrt.o' with a few other features, and the rest of the code transpiles cleanly.
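Concretely, here is the kind of Go such a transpiler would have to lower. The comments note rough equivalents in libtask's vocabulary (taskcreate, chancreate, chansendul, chanrecvul are libtask's real names; the mapping itself is my sketch, and libtask has no direct close()):

```go
package main

import "fmt"

func main() {
	ch := make(chan int) // ~ chancreate(sizeof(ulong), 0)
	go func() {          // ~ taskcreate(producer, ch, STACKSIZE)
		for i := 1; i <= 3; i++ {
			ch <- i // ~ chansendul(ch, i)
		}
		close(ch) // no libtask equivalent; needs a sentinel value
	}()
	sum := 0
	for v := range ch { // ~ loop over chanrecvul(ch)
		sum += v
	}
	fmt.Println(sum) // prints 6
}
```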
...but say this existed. What would be the point of writing Google Go code? A more compiler-friendly syntax and less latitude for simple memory/pointer mistakes? Would that be enough, or would you, with perfect C linkage, just end up writing most of the code in C and Lua? It certainly seems the effect of creating their own compiler (worse than gcc), an opaque "go" tool that does everything, project layout and hosting rules, a complicated FFI, etc. is to make a 'toolchain island', locking you into writing everything in Google Go where you might not otherwise.
I guess you aren't aware that Google Go has its own compilers, based on the Plan 9 C compilers, called "6g" and "8g" (depending on the architecture). These compilers produce much less efficient code than gcc does.
Yes, gcc now also compiles the language, but they still pimp their own compilers... iirc the "go" tool uses them by default (or maybe exclusively). Why create a new compiler? Why a project management tool that does everything except work with other build systems? Why isolate the language by making it difficult to use with others? I feel those are good questions to ask.