Why Go Is Not Good (yager.io)
438 points by pohl on June 29, 2014 | 356 comments



There's something very seductive about languages like Rust or Scala or Haskell or even C++. These languages whisper in our ears "you are brilliant and here's a blank canvas where you can design the most perfect abstraction the world has ever seen."

But, for systems programming, abstractions suck. They always, always have a cost. When abstractions break, you not only have to deal with a broken system but the broken abstraction itself too. (Anyone who has ever seen a gcc compiler error for C++ knows how this feels.)

Therein lies Go's value proposition. It does not make it possible to make things pretty (ugh, nil). It just makes it impossible (ok, really hard) to overcomplicate things. When you write Go code, you can picture what the C equivalent would look like. You want to deal with errors? Here's an if statement. Data structures? Here's a struct. Generics? Here's another if statement, put it inside your for loop.
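To make that concrete, here is what the error-handling case looks like in practice (a minimal sketch; the file name is made up):

  package main

  import (
      "log"
      "os"
  )

  func main() {
      f, err := os.Open("data.txt") // hypothetical file
      if err != nil {               // "here's an if statement"
          log.Fatal(err)
      }
      defer f.Close()
  }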

Obviously, Go is not the right choice of language for most things. When you're doing application development, you may be able to afford the cost of abstractions. But for tools that only need to do one thing and do it extremely well, it's either that or C. And I'm not going back to managing my own memory anytime soon.


> When abstractions break, you not only have to deal with a broken system but the broken abstraction itself too. (Anyone who has ever seen a gcc compiler error for C++ knows how this feels.)

Using C++ template error messages to attack generics in Rust and Haskell is pretty weak, because typeclasses were explicitly designed to avoid the problems of "ad-hoc" templates in languages like C++. Error messages are in fact what typeclasses are really good at.


Type classes are so good at error messages that they were proposed as the solution to the error message mess in C++ in the form of Concepts.


As someone who's spent more time than I'd like to remember improving type class error messages, this sentence boggled my mind. Then I remembered what C++ template error messages are like and it all made sense again.


There's no question that Rust and Haskell have better tools than C++ for abstracting code. Go is just demonstrating that you can write great software without the aid (and cost) of generics.


What is Go demonstrating about programming without generics that Lisp, Java (pre-2004), Python and tons of other languages haven't already demonstrated?


Nothing, that's the point. It's not a language the creators made because they are vain, or want to prove they are smart, or know what beauty is. It's a language that was created to get things done! Its beauty is the same beauty we see in Unix or C: simplicity.

Dear God, I don't know why so many programming languages do the same thing all over again, just for the sake of the syntax or the type system, or because the author feels so smart using FP that he can feel intellectually superior to all other human beings.

Since C, it's all the same programming paradigm; the rest is just detail. The only languages that have their own way of doing things, not covered by the C paradigm, are the Lisps.

Really, my dream language would use notes like in a music sheet (that is a genuinely different paradigm), or DSP with just I/O signals. That would be something new. The rest is just vanity.

And I don't want to be the lab rat of some language designer who is full of himself and doesn't think of me, the poor programmer who has to maintain code in the language he created!

This is the Unix philosophy... there's too much noise, and I'm sure these are the kinds of things that make people run away from technology.

We need to somehow find our way back to simplicity, for our own sake.


>It's a language that was created to get things done

And the hundreds of other languages serve what purpose then ?


I'm sure some languages are created just for the sake of syntax, others to prove a marginal point, and others because the author wants to be a tech celebrity. This is "the noise" I was referring to.

We can spot the really important languages, the ones that actually bring something new to the table with real originality and genius, and the imitators that follow.

The clear evidence for what I'm talking about: if you think of a niche where you want to create a program, you will have 3 or more languages to choose from.

Is choice good? Not always, because once we create codebases in the languages we choose, we are trapped.

All the languages we've heard of create programs somehow, otherwise we wouldn't know about them. But some are more pragmatic than others.

My sentence was meant to explain that the authors of the language were thinking more about the engineering aspect than about theory or research.

There are a lot of research languages out there... so in my view, those were not created with a pragmatic goal.

They are cool and will push the envelope, sure! But I don't understand the elitism here: bashing things that were created with the engineering axis in mind, while putting "not working yet" research that uses poor programmers as lab rats on a pedestal, at the same level as something that works because it was designed in a more conservative manner.


Is there demonstrably great software written in Go yet? I've seen some people using it but I've yet to see a "killer app" (the way e.g. MLDonkey convinced me I should take OCaml seriously, or Pandoc convinced me I should take Haskell seriously).


A few spring to mind, such as Docker and large parts of Google and CloudFlare.

However I think your benchmark for a language's worth is a pretty superficial one.


Docker.io?


Where is that great software that breaks new ground?


Note that the cost of interface{} is essentially the same cost Java imposes.

Luckily it's used less, but that doesn't really change it. Anything becomes a double pointer indirection.
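To illustrate (a minimal Go sketch; the values are arbitrary):

  package main

  import "fmt"

  func main() {
      // Direct storage: the ints live inline in the backing array.
      direct := []int{1, 2, 3}

      // interface{} storage: each slot is a (type, data-pointer) pair, and
      // the value is typically boxed behind that pointer: two hops to reach it.
      boxed := []interface{}{1, 2, 3}

      fmt.Println(direct, boxed)
  }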

And just for completeness : even java has better tools for abstractions.


These two statements made me cringe:

> But, for systems programming, abstractions suck. They always, always have a cost.

> Generics? Here's another if statement, put it inside your for loop.

If you care about speed (and many systems programmers do), this is exactly the opposite of what you want to do. Unlike your proposal of putting potentially-costly if-statements inside of for loops, generics/templates in c++ provide zero-cost abstraction in terms of execution time. (If you think dealing with the error messages presents too high a cost in terms of developer time, switch to clang.)


If you truly care about speed you'll have different optimizations for int32 and int64.

Depending on who you are working with, the lack of generics is a blessing. Some developers can't restrain themselves and create over-complex abstractions that are used only once.


Generics generally simplify code, you know. Plus, generic code tends to make guarantees non-generic code cannot.

Picture this function:

  foo :: (a -> b) -> (b -> c) -> (a -> c)
What does it do?

As you can see, this function takes 2 arguments, which appear to be functions. It also returns a function. The argument and return types of these functions are unknown, so you can't manipulate them. You can just pass them around directly. This puts really tight constraints on your code. So, assuming nothing fancy happens, there is only one correct body of code for this type signature:

  foo f g = \x -> g (f x)
In other words, function composition.

---

Generic functions have more guarantees than non-generic functions. Therefore, you are more likely to know what a generic function is actually doing.


Parametric polymorphism does not simplify code, it obfuscates it, as now you have to see _what_ is calling it. A better form IMO is restricting the types allowed, as it: 1. makes understanding much easier, 2. Doesn't create bloat in the form of n copies for visible symbols.


> Parametric polymorphism does not simplify code, it obfuscates it, as now you have to see _what_ is calling it.

This is backward. A function, polymorphic or otherwise, does not influence code that does not call it. Modulo far-reaching side effects of course.

It's when you look at the call site that you have to figure out what this strange `fold` function could possibly be about.

I'll grant that polymorphic functions are often more abstract than monomorphic ones. But they are simpler. They are also better at separating concerns. Take `map` and `filter` for instance. They capture common iteration patterns, so you don't have to entangle that pattern with the actual logic of your loop. Without parametric polymorphism, you could not write them (more precisely, you would have to repeat yourself over and over).
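To make the "repeat yourself" point concrete, here is roughly what `map` looks like without parametric polymorphism (a Go sketch; the names are made up, and you would need yet another copy for every element type):

  package sketch

  // One copy per type, each with an identical body:
  func mapInts(xs []int, f func(int) int) []int {
      out := make([]int, len(xs))
      for i, x := range xs {
          out[i] = f(x)
      }
      return out
  }

  func mapStrings(xs []string, f func(string) string) []string {
      out := make([]string, len(xs))
      for i, x := range xs {
          out[i] = f(x)
      }
      return out
  }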

> A better form IMO is restricting the types allowed, as it: 1. makes understanding much easier

That's just false. If you restrict the types a function can operate on, you allow the function to do more things to its data. The more you know about its input, the less you know about the function. With parametric polymorphism, you are actually hiding information from the function, preventing it from making whole classes of mistakes. Free tests!

Parametric polymorphism makes functions that use it simpler (as in, less moving parts). How could that possibly be harder to understand? Please give a concrete example, I don't understand where you're coming from right now.

> 2. Doesn't create bloat in the form of n copies for visible symbols.

That's an implementation detail, and mostly false anyway. Not every language is C++. Most languages that make use of parametric polymorphism don't duplicate code.


> A function, polymorphic or otherwise, does not influence code that does not call it.

Obfuscate does not mean influence. It obfuscates because now you have to follow the rabbit hole to wherever the type information is (a language detail, but that's how almost all parametric polymorphism works), which makes reasoning about it a pain.

> That's an implementation detail, and mostly false anyway.

No it's not 'unbound' parametric polymorphism in a compiled language has to produce symbols for any visible (not internal) function as there would be no way to know what might get called.

> Most languages that make use of parametric polymorphism don't duplicate code.

Yes, any compiled language does. How on earth would you call a symbol that took a bool vs. a size_t? (I recommend you look at how ELF works.) On a somewhat related note, a sufficiently smart compiler could 'deduplicate' common parts of the slow path within a generic function and create calls, but that's about all it could do for deduplication without performance hits.


You need practice with an ML derivative. Fetch a Haskell tutorial, then go write a little project of your choice that requires a few dozen lines.

> Obfuscate does not mean influence. It obfuscates because now you have to—

Wow, slow down. And give me an example, or I won't know what you mean.

> No it's not 'unbound' parametric polymorphism in a compiled language has to produce symbols for any visible (not internal) function as there would be no way to know what might get called.

Your lack of punctuation is hard to parse.

Anyway, it doesn't work like that. C++ for instance doesn't instantiate the polymorphic function for every possible type. Actually, it tries to compile monomorphic code first and only specializes polymorphic stuff as needed. This is why you need to actually use template code before the compiler can check it properly. (Notice how some error messages only surface when you use template code?)

Let me give you an example (untested code):

  template<typename T>
  T sum(vector<T> vec)
  {
    T sum = vec[0];
    for (size_t i = 1; i < vec.size(); i++) // start at 1: vec[0] is already in sum
      sum = sum + vec[i];
    return sum;
  }
Why the generic code? Because I'm likely to perform sums on integers, floating point numbers, complex numbers and other fancy stuff. I fail to see how this approach obfuscates anything, since it lets me write less code.

Now my C++ compiler will not compile this function for integers, floats, and every user-defined class I have in my program. If my program only uses it on vectors of integers, it will only instantiate the integer version, even if I have floats in my program.

> Yes, any compiled language does. How on earth would you call a symbol that took a bool vs. a size_t? (I recommend you look at how ELF works.)

Muhaha, you foolish mortal. Let me tell you how this works in OCaml.

Under the hood, OCaml data are one of two things: an integer, or a pointer to heap data. The compiler can distinguish them thanks to the least significant bit: 1 when it is an integer, 0 when it is a pointer.

This is possible because in most machines pointers are generally aligned to word boundaries, and words are almost always 16 bits or more. Integers on the other hand have one less bit. On a 32 bit machine for instance, OCaml integers fit in the 31 most significant bits. More precisely, when the program sees a 32 bit word whose value is 43, it knows that it's an integer, and that the underlying integer is 21 (meaning, 43>>1).
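If it helps, here's a tiny sketch of that tagging scheme (written in Go for concreteness; the helper names are made up, and it ignores negative numbers):

  package main

  import "fmt"

  type value uintptr

  // Integers are shifted left one bit with the low bit set; word-aligned
  // pointers always have a zero low bit, so the two cannot collide.
  func boxInt(n int) value      { return value(n<<1 | 1) }
  func (v value) isInt() bool   { return v&1 == 1 }
  func (v value) unboxInt() int { return int(v >> 1) }

  func main() {
      v := boxInt(42)
      fmt.Println(uintptr(v), v.isInt(), v.unboxInt()) // 85 true 42
  }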

You will note this suspiciously looks like a dynamic language implementation of runtime tags. It is a bit cleverer than that. First, the code is statically checked, so it never performs any runtime check with respect to this tag. The garbage collector on the other hand knows little about the program, and needs a way to distinguish raw data from heap pointers to do its job.

Now polymorphic code. In OCaml, a polymorphic function knows nothing about its polymorphic arguments. This is important, because it means the code inside won't ever inspect or modify the values at runtime. Take this example:

  let app f x = f x         (* app: (a -> b) -> a -> b *)
`app` is function application reified into a function. Yes it's silly in most cases. Bear with me. Look at its type. It accepts two arguments (of type a->b and a respectively), and returns a value (of type b). As you may have noticed, we have no frigging clue what those `a` and `b` mean. That's what it means to be polymorphic.

Now let's call the function on actual arguments.

  app (fun x -> x + 1) 42   (* the result is 43 *)
Okay, so the first argument is a function from integers to integers, and the second argument is… 42 (an integer). And poof magic, it works.

Under the hood it's not complicated. `app` knows that its first argument is a function, and it knows that the type of its second argument is compatible with that function. Since the first argument is a function, at runtime, it must be represented by a pointer to the heap. More specifically, it will point to a closure on the heap. We don't know much about this closure:

  +---+-------+
  | f | Data… |
  +---+-------+
We don't know anything about that `Data` stuff, but we do know that `f` is a pointer to code that will accept at least one argument.

Then there is 42. In the CPU it will be represented as 85 (42<<1 + 1). But it doesn't matter. `app` doesn't know if it's an integer or a pointer to the heap: from where it stands that word is just an opaque blob of data. The only safe thing it can do with it is copy it around. (And the static type checker ensures it does no more than that.)

So… `app` has 3 things: a pointer to a closure, a pointer to a piece of code, and an opaque blob of data (which happens to be an integer, but it doesn't know that). What it must do is clear:

  - Push the opaque blob of data onto the stack.
  - Push the pointer to the closure onto the stack.
  - Call f
And voilà, we have polymorphic code at the assembly language level. By carefully not inspecting the data, it works on every kind of data. No need for de-duplication.

Still, we don't have our result. We just called `f`, which has 2 arguments to contend with: its closure, and its "real" argument: 42. Now as you can see in the source code, `f` is not polymorphic at all. It works on integers. So it knows about its argument. Actually, it knows two things: the `Data` part of the closure is empty, and its argument is an integer. So it just adds 1 to 42 (possibly using some clever CPU arithmetic involving the LEA instruction), pops 2 elements off the stack, and pushes its result (43, which we represent as 87).

Now we're back to our polymorphic `app`, which has this 87 blob of opaque data at the top of the stack. Well, it just returns it to its own caller, who hopefully will know how to handle that data.

---

As I have just illustrated, there is no need for duplication in the first place. Polymorphic code in OCaml generates polymorphic code at the assembly language level. And this was a naive compilation scheme. "Dumb" turns out to be sufficiently smart. De-duplication out of the box if you will.

And about that "slow path" (implying a fast path somewhere) that is typical of JIT compilation, we don't have that shit in statically typed functional languages. The "slow path" is already fast, since it doesn't perform any test at runtime!


> If you truly care about speed you'll have different optimizations for int32 and int64.

This is exactly why c++ allows template specialization, and if you don't care for hand-optimizing, you can get both implementations almost for free.

> Depending on who you are working with, the lack of generics is a blessing. Some developers can't restrain themselves and create over-complex abstractions that are used only once.

I can't comment on the competency of your coworkers, but I certainly see how Go could be useful in situations without the kind of performance-constraints which demand a language like c++.


> If you truly care about speed you'll have different optimizations for int32 and int64.

I'm pretty sure that's what C++ templates specialisation is for...


Which in c++ you can easily do, as you probably know. All you need to do is have an if statement that compares the type parameter of your template to int32 and int64. The good thing is that this kind of optimization is transparent to the user if the code works. Even Haskell has something similar, with the SPECIALIZE pragma, although in this case you have to trust the compiler to come up with the optimizations.


Exactly! Go makes the cost of generic code explicitly visible.

Generics encourage over-generalizing behavior that runs counter to writing highly performant code. If you care about speed, you don't spend time making your code generic. You optimize closely to your use case.
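In Go that just means writing the monomorphic versions out by hand, each one free to be tuned for its own type (a minimal sketch):

  package sketch

  func sumInt32(xs []int32) int32 {
      var s int32
      for _, x := range xs {
          s += x
      }
      return s
  }

  // Same body, different type; nothing stops you from optimizing the two
  // copies differently (unrolling, SIMD-friendly layouts, and so on).
  func sumInt64(xs []int64) int64 {
      var s int64
      for _, x := range xs {
          s += x
      }
      return s
  }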


>Go makes the cost of generic code explicitly visible. Generics encourage over-generalizing behavior that runs counter to writing highly performant code.

I think you may be missing some info regarding generics in Rust and Haskell. As I mentioned in the article, there is zero runtime overhead for generic programming in Rust and Haskell. Zip. Zilch. Nada. That's why their constraint-based static generics system is awesome.


Haskell's generic programming constructs do not have zero runtime overhead (at least on GHC, the dominant compiler). Typeclass-overloaded functions take an extra argument at runtime, in which the class "methods" are looked up. In particular, the typeclass-based definition of add3 turns into this Core code, which has an extra "$dNum_az0" arg to carry Num methods:

  ghci>let add3 a b c = a + b + c
  
  ==================== Simplified expression ====================
  GHC.Base.returnIO
    (GHC.Types.:
       ((\ (@ a_ayZ)
           ($dNum_az0 :: GHC.Num.Num a_ayZ)
           (a_ayG :: a_ayZ)
           (b_ayH :: a_ayZ)
           (c_ayI :: a_ayZ) ->
           GHC.Num.+ $dNum_az0 (GHC.Num.+ $dNum_az0 a_ayG b_ayH) c_ayI)
        `cast` ...)
       (GHC.Types.[]))
Compare this to the Int-specialized add3, which does not have to be passed the extra $dNum_az0 argument:

  ghci>let add3 a b c = a + b + c; add3 :: Int -> Int -> Int -> Int
  
  ==================== Simplified expression ====================
  GHC.Base.returnIO
    (GHC.Types.:
       ((\ (a_azj :: GHC.Types.Int)
           (b_azk :: GHC.Types.Int)
           (c_azl :: GHC.Types.Int) ->
           GHC.Num.+
             GHC.Num.$fNumInt (GHC.Num.+ GHC.Num.$fNumInt a_azj b_azk) c_azl)
        `cast` ...)
       (GHC.Types.[]))
Now, am I saying that the typeclass method isn't fast, or that GHC can't then optimize that Num dictionary away via specialization or inlining? No, I am not saying that. But it certainly doesn't always do that, resulting in a performance hit at runtime. More info @ http://www.haskell.org/haskellwiki/Performance/Overloading


If you have an explicit type signature and compile with -O, GHC will auto-specialize, afaik.


Please correct me if I am wrong about boxing in Rust, but doesn't boxing produce some overhead, by creating extra heap allocations (and therefore possible memory fragmentation and almost certainly poor cache locality), as well as pointer dereferences?

I really hate it in Java when I need an array of bytes but don't know the size in advance, or the size changes, and I have to incur all the cost of boxing those bytes up. On 64-bit systems (most of them), pointers are 64 bits which is at least twice as large as the most common things you put in a list (int, float, byte).


Boxing is heap allocation. Rust lets you explicitly choose if something is on the heap or on the stack. Idiom prefers stack allocation.

You don't need to box something to make it generic.



Lack of generics is part of the reason why Go is easier to learn and the Go compiler is faster than, say, Rust.

Sure, generics don't have runtime overhead in Rust and Haskell but they have other costs. You always pay for abstractions some way.


> Lack of generics is part of the reason why…the Go compiler is faster than, say, Rust.

The speed of the Rust compiler has little to do with generics and everything to do with LLVM and its optimizations and code generation. (Run with -Z time-passes if you don't believe me.)


I don't know if it has to do with generics, but you can't blame it on LLVM.

    /usr/src/rust/src/libsyntax % time /usr/src/rust/x/x86_64-apple-darwin/stage2/bin/rustc lib.rs -o /tmp/x.dylib --crate-type dylib -C prefer-dynamic -Z time-passes
    [most output removed...]
    time: 6.186 s	type checking
    time: 5.084 s	translation
    time: 7.221 s	LLVM passes
    /usr/src/rust/x/x86_64-apple-darwin/stage2/bin/rustc lib.rs -o /tmp/x.dylib    22.44s user 0.76s system 99% cpu 23.301 total
First of all: no more than 1/3 to 1/2 of the time is spent in LLVM in this particular example. Okay, that's cheating because there is no -O, as I'm guessing your statement assumed, but 22 seconds is already a huge amount of time to compile 30,000 lines of code, so unoptimized builds are relevant to claims that rustc is slow. The other points apply regardless of optimization setting.

Second: rustc will take this long to recompile libsyntax every time anything in it changes. If this were written in C, most changes would only require recompiling one source file, even though, again, header files are not treated well (something that Rust does not need to replicate). In practice this means that the typical latency between changing something and seeing the output in a C/C++ program is an order of magnitude lower.

Third: The same separation that makes incremental compilation work in C/C++ allows parallelism (make -j) in full builds. rustc uses only one core per crate. Again, headers reduce the gains but Rust doesn't have that problem.

Fourth: If we compare to C rather than C++, we're off by sometimes an order of magnitude regardless of parallelism. Here is some random C program (Apple as), a total of 26929 lines, compiling in 0.90 seconds:

    clang -o foo -I ../include -I ../include/gnu -I. -DNeXT_MOD -DI386 -Di486      0.73s user 0.16s system 98% cpu 0.904 total
(With optimizations it is 2.426 seconds.)

Or libpng, 32433 lines in 0.83 seconds:

    clang -o png *.c -I. -lz  0.70s user 0.12s system 98% cpu 0.831 total
With the Linux kernel, compiled without -j and at -O1 using GCC (both of which make it slower), it's not an order of magnitude off, but it's still significantly faster than Rust: make ARCH=arm zImage 540.25s user 73.71s system 96% cpu 10:33.81 total

It compiled 1572713 lines of code; normalizing to 30,000 gives us about 12 seconds.

On the Rust side, linking libstd, about the same size, takes 6 seconds, but librustc, three times the size, took 216 seconds (10 times as long as libsyntax) to get to linking. So it varies, but I guess libsyntax is representative.

I do not know enough to have a definitive opinion of why this difference exists, but judging from the difference between C and C++, I wouldn't be surprised if it were related to Rust's complexity, which includes generics.


> First of all: no more than 1/3 to 1/2 of the time is spent in LLVM in this particular example.

"translation" is mostly LLVM (in particular, allocation of LLVM data structures).

> I do not know enough to have a definitive opinion of why this difference exists, but judging from the difference between C and C++, I wouldn't be surprised if it were related to Rust's complexity, which includes generics.

If you look at the amount of code in a typical large Rust program that is related to generic instantiations, it's pretty minuscule (less than 10%).

Typechecking is known to be slow because the way we do method lookups is not well optimized. I don't believe that this is a fundamental issue; it's more that nobody has gotten around to improving it yet.


I don't think the rust compiler has been particularly extensively optimised at this point. There's no blindingly obvious hotspots (last I profiled the biggest single time sink was hashmap lookups in type checking), but there's not been any real effort in reducing compile time yet, as far as I know.


Based on this logic I assume you code in nothing but machine code? After all even assembler is an abstraction and therefore must have a cost. Heaven forbid you should do something as extravagant as use C for something, that level of abstraction (all those long jumps and stack manipulations, oh my) must positively destroy performance. Do you also happen to program using the gentle flapping of butterfly wings to disturb air currents and redirect cosmic rays?


The rust compiler is actually really quick. The LLVM optimization passes and code gen is where most time is spent (this is after monomorphisation of generics). The Go compiler does very few optimizations compared to LLVM, which leads to faster build times but slower code.


Last I checked compilation time wasn't really a thing a ton of folks worry about (myself included). Faster machines and reasonably better compilers have mostly solved this problem. I'd take zero-overhead generics over a slightly faster compiler any day of the week.


(incremental) compilation needs to be fast enough. A too-slow build cycle can really kill productivity, both in trying out new ideas for code and in debugging. The build times of the rust compiler are pretty close to the limit of what I'll tolerate, and I would really love to have compilation speeds similar to go.


Speak for yourself - slow compilation drives me crazy, as it increases the latency between making a change and seeing the result. Not that I believe that generics require slow compilation.


Fast build time is explicitly something Go was built for, to fix 40 minute compile times.


The dmd compiler for D compiles faster than gc (the Go compiler) and yet supports compile-time templates.


This is incorrect. Generics in C++ are zero cost, since they are specialized at compile time. On the other hand, if you want to write generic code in Go you have to use interface{} types everywhere. That means that values have to be tagged, those tags checked at run time, additional pointers everywhere, bad memory layout, etc. So generic code in Go is significantly slower than in C++.


It depends on how you quantify cost. There isn't any performance cost, yes (which is what 'zero-cost abstractions' usually means in C++), but there is a) an increase in code size, and b) an increase in complexity/difficulty of understanding of implementation (and to some extent use). These may be good tradeoffs to make (in many of the areas where C++ is used they make sense), but I think 'zero-cost' is a disingenuous way to put it.


How are generics more difficult to understand than hacking your own mechanism together with casts? That is something I do not understand.


That's the thing. It's easier to hack something together with casts that looks like it's okay, and not understand that you're doing something wrong.

With generics, the novice finds cases where the compiler catches them doing something wrong, so they rewrite the code using casts in a way that looks right but is subtly wrong.

The end result is code that appears correct to the novice, the novice walking away with the feeling that generics are too complicated, and a mysterious corner-case bug that bites off someone's arm once every five years.

The nature of failing to understand is that the person who fails to understand often fails to understand that they misunderstand, or often misattribute their misunderstanding. The tool gets blamed for getting in their way of writing "good" code. On the other hand, there are plenty of tools that give perfectly sensible error messages to anyone with a PhD in type theory, but a second year university student sees "Attempt to cast non-monoid endofunctor to monad. Please uninstall compiler and shave off neck beard."


There might actually be a performance cost due to increased code size (by way of increased amount of icache misses).


The implementers of MLton, an ML compiler that does C++ style monomorphization, found that after optimization the code size is actually smaller with monomorphization. That's because the specialized code is simpler and can be further optimized. So even if you have multiple specialized copies, the total is still smaller. See here: http://mlton.org/Performance


Doesn't this assume that compiler complexity/work is not a cost?


>But, for systems programming, abstractions suck.

Could you clarify what you mean by "systems programming"? To me, that means working with embedded systems, which Go is certainly not appropriate for.


> Could you clarify what you mean by "systems programming"? To me, that means working with embedded systems, which Go is certainly not appropriate for.

The blunt but approximately correct version is that embedded means that you're running on hardware that isn't powerful enough to run a Linux kernel.

Systems programming just means you're working below the application layer. So if you take your laptop and write a device driver, or work on filesystem or networking code, you're doing systems programming without doing embedded.


Just curious : how does one run go without a linux kernel ? (without a kernel at all, please, I know about the freebsd port)


You'll probably be looking for a Go runtime that fills a kernel shaped hole. Without a kernel, where do all your syscalls go?


Yes but the point of the parent poster was that go can work on minimal embedded systems, which to my knowledge it cannot, so I enquired.


No, the point of the parent poster was that "systems" is not the same as "embedded", and that while go may not work on the latter it can work on the former.


Systems does not necessarily imply embedded. I work in systems and, as far as I am aware, the term simply means software which primarily services the hardware (as opposed to the user). Drivers, HW interfaces, anything which has to make assumptions about the underlying hardware it is running on/servicing.


By systems programming I mean writing the code that applications and distributed systems run on top of. Raft (https://github.com/goraft/raft) and groupcache (https://github.com/golang/groupcache) come to mind as examples.

That's a good point though. A lot of people mean different things by systems programming.


> A lot of people mean different things by systems programming.

Actually, no. It meant one thing until Go proponents tried to market their language and realized that their target audience didn't actually care.


Several of the initial Go designers were doing "systems programming" in the 1960s under the exact same meaning.

I'd also s/didn't actually care/got confused/ in your last sentence too.


Doesn't Go have a GC? How can you then "picture what the C equivalent would look like"?

Yes there are GCs for C, but is anyone successfully doing "systems programming" (whatever that may be) in C with GCs?


> Yes there are GCs for C, but is anyone successfully doing "systems programming" (whatever that may be) in C with GCs?

Actually, yes. But it's hackish (relies on some pretty complex macros) and requires you to adopt certain conventions. Still, it's doable and a whole lot safer than managing your memory directly, in terms of leaks and use-after-free. The real cost to me is that macro magic, which should not be required, but it's the only thing I could think of to make this work. To give you an idea of just how ugly this is: I re-defined 'return'. Any C hacker will be able to deduce the rest from that one hint ;)

On another note, I felt - and feel - that this was not the proper solution but the various policy choices made this pretty much the only way in which it could be done. And it works.


You rolled your own? Why not use the Boehm conservative collector?


In three letters: NIH. Management decision was that all IP had to be 100% owned by the company and had to be in 'C', in spite of an enormous amount of friction between C and the project as well as a bunch of work by others that could have been leveraged if we had decided to use code from other contributors. I got called in long after these decisions were made and it was very clear they weren't going to budge on those. There is a lot more to this story but I'm not at liberty to tell. Let's just say I learned a lot.


So you had to write your own compiler, operating system, and runtime libraries too?

-- I know, you didn't think it made any sense either. I'm just pointing out that a line has to be drawn somewhere; where it gets drawn is actually arbitrary.

Yeah, I've had to deal with ridiculous mandates from on high too, though none anywhere near that onerous. In my previous job, we were writing a compiler. It was mandated to be in C++ -- the first mistake -- and we had to use smart pointers instead of GC -- also a mistake. But the completely idiotic thing was that we were not allowed to declare any exception classes. The VP of Engineering -- a very smart and experienced but very arrogant guy -- had seen exception hierarchies get out of control before and decided the solution was to ban them.

But that's on a pretty small scale compared to what you're talking about.


> So you had to write your own compiler, operating system, and runtime libraries too?

I tried that line and it did not work.


Huh. I wonder how they defined what was acceptable and what wasn't.

Oh well, I'll be fascinated to read the book if you ever write it.


Reminds me of my last^2 job: we were using Scala and the "technical architect" made two decisions that were mildly wrong in isolation but interacted rather badly: we'd make heavy use of monads for core functionality, and we wouldn't use scalaz. So I spent a while reimplementing parts of scalaz, and learned quite a lot (though I doubt I was adding much business value while doing so).


So, the implementation was "conservative", but on a very different level.

Sorry, could not resist.


No, no need to apologize. It's one of the strangest assignments I've ever had and there were a few twists to the whole story that would make for a good book.


That's true. It's not just GC actually. Slices and goroutines don't have direct analogues in C either. But it is fairly easy to reason about the runtime complexity of these conveniences.

But like I said, if I didn't care about GC or concurrency, I'd be writing C.


Isn't the C equivalent of a slice just a struct containing a *T, a length and a capacity?
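Something like this, I mean (a sketch in Go syntax; it mirrors what reflect.SliceHeader describes):

  package sketch

  import "unsafe"

  // What a []T boils down to under the hood:
  type sliceHeader struct {
      data unsafe.Pointer // the *T
      len  int            // current length
      cap  int            // capacity of the backing array
  }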


And some methods for manipulating it (slicing it), and reference counting. And macros for automatically ref'ing/unref'ing.


Reference counting? I didn't know reference counting was used with slices, I thought they just used the GC.


Yes, but since C is not a GC-language, I thought I'd add that to reach more equivalence :).


I don't agree with the GP, but if Go were very similar to C and the only big difference were that C has no GC, it would be pretty easy to picture what the C equivalent of Go code would be. Exactly the same, but with calls to `free()` at the end of some functions (or preempted between instructions at unpredictable places).

Don't make GCs a bigger deal than they are. They are a tool to remove the need to call `free()` at the right time, with the downside that you don't get to control what the GC thinks is the right time instead.


There's also the overhead of the mark phase, which has to work out dynamically what could be worked out statically in a system with manual memory management. That's where much of the overhead of GC comes from.


I'll add that the overhead is not proportional to the amount of garbage your application generates; it's proportional to the number of data blocks your application has allocated. Garbage collection is also often performed by suspending the application ("stop the world") at unpredictable moments and with unpredictable duration. Garbage collection can thus be a serious problem for some types of applications. This is why the GC should be optional.


> They are a tool to remove the need to call `free()` at the right time, with the downside that you don't get to control what the GC thinks is the right time instead.

That's not actually true. They also allow you to do things that you otherwise couldn't. Try implementing persistent [1] maps or sets without a GC.

[1] http://en.wikipedia.org/wiki/Persistent_data_structure
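Here's a minimal sketch of why, with a persistent list in Go (hypothetical types; a persistent map works the same way, just with tree nodes):

  package sketch

  type node struct {
      head int
      tail *node
  }

  // cons builds a new list without touching xs: old and new lists share
  // every node of xs. Which list dies first is only known at runtime, so
  // only a GC (or reference counting) can reclaim the shared nodes.
  func cons(x int, xs *node) *node {
      return &node{head: x, tail: xs}
  }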


That's still just a problem of deciding when to call free.


Sort of. But you can't statically decide it. In fact, really the only way to do it is with a GC (or ref. counting, etc.). You can't know until runtime when a node will need to be freed.

And my point still stands. The GC allows you to do things you otherwise couldn't.


I'm not writing the following to make people pick another language - if Go is suitable for your project in terms of features, runtime, tools and community, then by all means use Go. It's a fairly decent platform to target and it can further evolve to meet more stringent needs.

But if we are talking about the cost of abstractions, the biggest elephant in the room is that Go's GC is NOT optional, which makes it unsuitable for ... (1) systems programming and (2) real-time systems.

C++ and Rust do not suffer from this. And Go is not even suitable for soft real-time systems, because for that you need a GC that never stops the world - right now Go is even less suitable than Java in this regard, because at least for Java you've got the pauseless GC from Azul Systems.

> "But, for systems programming, abstractions suck. They always, always have a cost."

That's a logical fallacy, because if all abstractions suck, then why aren't we doing "systems programming" in assembly (where systems programming is whatever definition du jour you prefer to fit Go in)? Clearly, it depends on the project where the line can be drawn, since we are always making compromises for gained productivity, no? And going back to the non-optional garbage collection that's not even suitable for soft real-time systems: it kind of makes the point about Go avoiding higher-level abstractions on purpose bullshit.

> "It does not make it possible to make things pretty (ugh, nil)."

It's not about prettiness, it's about correctness - which in a language containing memory-unsafe constructs that can lead to billion-dollar bugs (e.g. Heartbleed) is a freaking huge deal. Rust is very innovative in this regard, because it's a systems programming language that solves many issues by means of its more advanced type system - and surely no type system is perfect, but even a single bug that's caught by the compiler is a bug that won't reach production.

> "It just makes it impossible (ok, really hard) to overcomplicate things."

I wish developers would stop equating "complicated" to things "I don't understand". That's not what complicated means. Here's the definition: "consisting of many interconnecting parts or elements". That Go doesn't allow certain higher-level abstractions, that's in itself a recipe for complications.

> "for tools that only need to do one thing and do it extremely well, it's either that or C. And I'm not going back to managing my own memory anytime soon"

The choice between C and Go, given that Go is garbage collected, is a false dichotomy.


>I wish developers would stop equating "complicated" to things "I don't understand".

The argument is a bit more evolved than that, but flawed nonetheless. The argument usually invoked for Go is that, by eschewing selected language features, it prevents developers from shooting themselves in the foot with unneeded complexity introduced by faulty abstractions.

I get the argument. I have seen my fair share of dug-out-from-hell complex projects. What the argument misses, though, is that not all abstractions are faulty. Computer science, like most science, is a game of increasingly abstract reasoning. If implemented correctly, the more abstract the better. The endgame is "Computer, build me a Mars round-trip ship".

Abstractions are good, when well written. They allow us to think on a higher level. Think of them as a fixed learning cost replacing a variable development cost.

Go kind of throws out the baby with the bathwater.


    > The endgame is "Computer, build me a Mars round-trip ship".
I think nobody would argue against that endgame, in the broad strokes. But I think Go can be understood as a response to [what the Go developers perceive as] overly expressive languages, languages that have overstepped our ability to responsibly abstract details, languages whose abstractions hide details that are still important and necessary to make explicit.

Go is "lower level" in that it purposefully eschews those abstractions, but I think it does that successfully, without a significant loss in expressivity, and with a more-than-commensurate gain in understandability, maintainability, performance, etc.


I agree. That is a correct reading. Where I do not agree is on the assumption that a basic feature set can be construed as a simpler language. While true for the language proper, it isn't true for real usage of language plus libraries.

Take operator overloading. It can be used to create hellish code. It can also be used to create great libraries (numpy for instance). Because of the danger of hellish code, Go makes it impossible to create numpy in Go.

In the end, while Go is simpler, a numpy in Go would be more complex[1], because the language is not expressive enough. The simplicity argument, while true for the language proper, is false for advanced usage.

[1] For example, you can't write Ma * Mb, but must remember the dot product method name.


Is it necessarily a good idea to make Go into a number crunching language?


I have no idea, but that is irrelevant. It is an example. I'm sure you can imagine others in the use case you define for the language.


> I wish developers would stop equating "complicated" to things "I don't understand".

Rich Hickey's presentation on this topic should be required viewing for everyone: http://www.infoq.com/presentations/Simple-Made-Easy


Go is not even suitable for soft real-time systems, because for that you need a GC that never stops the world - right now Go is even less suitable than Java in this regard, because at least for Java you've got the pauseless GC from Azul Systems.

This is an inadequate analysis. I am writing a soft real-time system in Go, and GC pause simply isn't an issue for me. Go allows one to greatly limit the reliance on GC. The GC in Go certainly places an ultimate limit on the memory footprint of any one Go process, but a whole lot of productive work can be done within such limits.

Also, my program is a rewrite of one in Clojure, so it ran on the JVM. Go is giving me better performance for my particular application. And given that Go is just starting out in its development, I expect there to be some improvement in the future.


"But, for systems programming, abstractions suck. [...] But for tools that only need to do one thing and do it extremely well, it's either that or C."

C and Go are just two particular piles of abstractions. The tools work better for some problems, but that's not because "abstractions suck" for those problems.


> These languages whisper in our ears "you are brilliant and here's a blank canvas where you can design the most perfect abstraction the world has ever seen."

These languages have built-in abstraction tools (templates) that you can use to create your own abstractions. What I like about Go is the abstraction tools are primitive and allow for consistent & precise expression. The expression may not be as concise in certain cases; however, you can build the mechanisms into your architecture.

> But, for systems programming, abstractions suck. They always, always have a cost. When abstractions break, you not only have to deal with a broken system but the broken abstraction itself too. (Anyone who has ever seen a gcc compiler error for C++ knows how this feels.)

That is why custom abstractions to your problem domain are important. A framework or a language with lots of features will get you started quickly by providing out-of-the-box tools that you can hang your program architecture on. However, I prefer to have a custom architecture & idioms which are appropriate to the current domain & evolution of the domain.


>What I like about Go is the abstraction tools are primitive and allow for consistent & precise expression.

Well, not really consistent.

For example, try having a range loop for your own structures. Or something like make for them.

And not really precise. The need for interface{} and type switches in idiomatic Go code throws preciseness out of the window.


They claim this is a feature, not a bug. It means that range will never block or do weird stuff. Except of course when it does (bastard question for those who think they know : what does range do on a nil channel ?)
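(For those playing along: it blocks forever; a receive from a nil channel never proceeds. A runnable sketch:)

  package main

  import (
      "fmt"
      "time"
  )

  func main() {
      var ch chan int // nil: declared but never made

      go func() {
          for v := range ch { // receive on a nil channel blocks forever
              fmt.Println(v)
          }
          fmt.Println("never reached")
      }()

      time.Sleep(100 * time.Millisecond)
      fmt.Println("main exits; the ranging goroutine never woke up")
  }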

Despite all these clarity claims, Go has significant pitfalls, like the nil channel above (and you will encounter nil channels). There are other things, like "what is a pointer in Go?" If your answer involves "*", I urge you to reconsider. (Hint: what's the difference between []int and [5]int? Is one of them a pointer? What about channels (of course I talked about nil channels)? Maps?)

But every type can be typedeffed to a pointer type of itself, like in Pascal (lots of things look like Pascal), resulting in completely unpredictable reference or value semantics (or my favorite: partial reference semantics).

Does Go have generics? YES (make, range, ...). Go has something no other language has: return-type-generic function types (meaning a function's meaning changes depending on what you assign the result to, like range). Does Go have operator overloading? YES. Is Go object oriented? YES (including single inheritance). Does Go have (complicated language feature X)? Probably yes. But all of these features are only accessible to Rob Pike, who has apparently decided that nobody has any use for any kind of tree or graph data structures, matrices, complex numbers, or so.

In practice you can catch the go team themselves in errors on the language semantics in their presentations, so I think a VERY strong case can be made that it's not at all that obvious.

But the truth is: this language, due to politics (the high position of its inventor), has 10 or so FTEs behind it, with lots of paid people contributing various small bits. Is it anything more than some guy's idea of his own favorite programming language?

The honest answer is simply : no.

The only real advantage Go has is a small yet functional and pretty complete standard library (like C++ had in the 1980s). It is an advantage that will fade, just like it has faded for every other language.


I believe Go is "popular" because of Google. It might solve a particular internal problem for them, but for the rest of the world, there are better languages. To its credit, it makes Java look advanced, and that's no small feat!


> Generics?

Built-in slice and map types cover most real-world needs quite neatly anyway.


For some applications. But once you start venturing into the realms that Rust is targeting, having user defined generic data structures is very important.


> These languages whisper in our ears "you are brilliant and here's a blank canvas where you can design the most perfect abstraction the world has ever seen."

They also tell me "now you don't have to wait for the language designers or compiler writers in order to 'implement another feature'." Not that _I_ would necessarily be this "brilliant" guy that implements these features. Most likely I will just find some third party library that does it.


The problem is when you have to maintain code with millions of lines and hundreds of abstractions; those "features" will haunt you in your nightmares.

This is "The Curse of C++", and some languages mentioned in the article, while beautiful and correct at first sight, are going down the same road.

Do we use a programming language to look smart, to create correct code, or to efficiently solve problems in a maintainable and sane way?

Go is pragmatic; there's nothing wrong with that. But I agree that adding some features to it would not hurt either (like generics and enums) :)


To understand a language, one must know what problem it was designed to solve. It isn't always obvious, or what it initially looks like it was designed to solve, or even what the community thinks it was designed to solve.

Erlang, for instance, isn't about concurrency. It's about reliability.

Go, I think, is also not about concurrency. It's about building a language that can be sanely used by reasonably large groups of people of varying levels of skill, yet still produce fairly good software even so, without the language forcing a complexity explosion to deal with it.

Consequently, this does not appeal to a lot of relatively skilled programmers used to programming alone. It isn't my personal pick of favorite language, for instance. However, if I could push a button for free, I would convert my workplace of a couple hundred developers to it in a heartbeat, whereas I probably wouldn't actually do that with my favorite language. It is not, of course, a magical fountain of code quality, but it would give me the best tools and best foundation to clean up code bases that in all the other candidate languages I know are one or another sort of mess.

If I were starting a new startup right now and Go were even remotely appropriate, I'd use it. But in my hobby projects? Not really. Except maybe to smash out a microwebsite, it's pretty good there.

So, you know, a lot of the question is what exactly are you looking for in a language? I like Haskell, but the idea of even proposing to change a project at work to it is laughable... and this is important... nor would I expect to enjoy the result two years later if I won. The mess of code that would result from people hitting Haskell with a stick until it did what they wanted it to do would be an unstoppable torrent of ill-conceived code. On the other hand, Go would almost certainly produce much cleaner code, because that's where it really shines. Maybe it isn't "good", but it's the best choice right now in a lot of places.

(It's interesting to contrast Go's approach to this problem with the other major language to tackle this problem space, Java. Despite attacking the same problem, the approaches are significantly different, and I think Go's way better. I'd hesitate to actively predict this, but Java could definitely be feeling some heat from Go in three to six years in a way that very few languages have actually managed to provide any challenge to Java in a long time.)


I have the same line of thinking as you do. The difference is between using something you like and can work with, and choosing something you can use in a bigger group.

The HN crowd is top smart; things like Rust and Haskell are a breeze for people here. But this is not the reality of the tech field. The majority of people I know in tech can't handle the more powerful languages; it's too much for them.

In the end it's just that: know how to choose the right tool for the job, and don't do it with your ego.

Adaptation is really important, and in your thoughts we can see a lot of that.

In Wonderland people may have the IQ to spend on the extra concepts and power a language may provide, but experience is antagonistic to this dream.

The cool thing about smart people making complex things simpler is that many more people are able to follow. It's the democratization of computing. This is totally at odds with the elitism we can see in some tech circles, and I'm totally against it.


Rust and Haskell are made to reduce the amount of problems you have.

I've seen newbie programmers learn Haskell as a first language in under a term. So I don't believe the marketing from the Google people that their language is easier because it is simpler.

Choosing something because marketing told you it was easier is just as silly as choosing something because of ego. Calling people egoists when they choose a tool because of reasoned arguments based on evidence is simply anti-intellectual, and rude.


"I've seen newbie programmers learn Haskell as a first language in under a term."

Even as a fan of Haskell, I have to say that if you spent one term on Haskell and one term on Go, the latter students are going to be far closer to a place where you could drop them into a real job and get real work out of them. (That said, as I sit here and look back on what "one term" really constitutes, it's not much, regardless of language.)


> Choosing something because marketing told you it was easier is just as silly as choosing something because of ego. Calling people egoists when they choose a tool because of reasoned arguments based on evidence is simply anti-intellectual, and rude.

I just don't know where in my post I said something like that. Marketing? Egoists? Where did I say that?

If you happen to be above average in, say, the English language, and you know a lot of words and sentences and poems, the thing is: if you choose to speak in the English you know best, you will communicate with fewer people. Or you can jump to a lower level so everybody can understand.

Of course you can use the advanced English with elite people. You can teach people the 'better English', but a lot of them won't care, because they care about other things, and I don't blame them (they may be worried about math or trips).

My point was just something like that. Nothing more, nothing less.


If you happen to be above average in, say, the English language, and you know a lot of words and sentences and poems, the thing is: if you choose to speak in the English you know best, you will communicate with fewer people. Or you can jump to a lower level so everybody can understand.

Intended side effect of irony?


For fear of disagree downvotes: I would say that many of the qualms brought up in this article are problems that are encountered fighting the language.

The problem of 'summing any kind of list' is not a problem that is solved in Go via the proposed kind of parametric polymorphism. Instead, one might define a type, `type Adder interface { Add(Adder) Adder }`, and then a function to add anything you want is fairly trivial, `func Sum(a ...Adder) Adder`; put anything you want in it, then assert the type of what comes out.

When it comes to iteration, there is the generator pattern, in which a channel is returned and the thread 'drinks' the channel until it is dry. For example, `func (m myType) Walk() <-chan myType` can be iterated over via `for v := range mt.Walk() { [...] }` (see the sketch below). Non-channel-based patterns also exist; tokenisers usually have a Next() which can be used to step into the next token, etc.
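Here's a runnable sketch of both patterns (the concrete types and values are made up):

  package main

  import "fmt"

  // Interface-based "generic" summing:
  type Adder interface {
      Add(Adder) Adder
  }

  type Int int

  func (a Int) Add(b Adder) Adder { return a + b.(Int) }

  func Sum(a ...Adder) Adder {
      acc := a[0]
      for _, v := range a[1:] {
          acc = acc.Add(v)
      }
      return acc
  }

  // Channel-based generator over a tree:
  type myType struct {
      val      int
      children []*myType
  }

  func (m *myType) Walk() <-chan *myType {
      ch := make(chan *myType)
      var visit func(*myType)
      visit = func(n *myType) {
          ch <- n
          for _, c := range n.children {
              visit(c)
          }
      }
      go func() {
          visit(m)
          close(ch)
      }()
      return ch
  }

  func main() {
      fmt.Println(Sum(Int(1), Int(2), Int(3)).(Int)) // assert on the way out: 6

      root := &myType{val: 1, children: []*myType{{val: 2}, {val: 3}}}
      for v := range root.Walk() {
          fmt.Println(v.val)
      }
  }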

The Nil pointer is not unsafe as far as I know, from the FAQ: http://golang.org/doc/faq#no_pointer_arithmetic

The writer seems to believe that functions on nil pointers crash the program, this is not the case. It's a common pattern in lazy construction to check if the receiving pointer is nil before continuing.
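
For example, something like this (a sketch of the pattern; the `config` type is invented):

    type config struct {
      values map[string]string
    }

    // Get can be called on a nil *config: calling a method on a nil
    // pointer is fine as long as the method checks the receiver
    // before dereferencing it.
    func (c *config) Get(key string) string {
      if c == nil {
        return ""
      }
      return c.values[key]
    }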

Go is not flawless by any means, but it encourages a specific style of simple but powerful programming that I personally enjoy.


>The writer seems to believe that functions on nil pointers crash the program, this is not the case. It's a common pattern in lazy construction to check if the receiving pointer is nil before continuing.

And what happens when you don't check? It crashes. That's the unsafe part.

These crashes are simply not possible in Rust and Haskell, and the type system notifies you if failure is possible (because the function will return an Option/Maybe).


Whoa, Rust is great, but I think you're going a little far on the hyperbole there.

You can easily generate a segfault in Rust in 'unsafe' (or 'trusted') code; admittedly, that restricts errors of that nature to code that uses unsafe blocks.

Practically speaking that's pretty common; once you enter an FFI unsafe block, you lose all type safety, but you can totally do it without FFI too, e.g. using transmute().

In fact, there's no way to know if your code contains 'hidden' unsafe blocks wrapped in a safe API in some 3rd-party library that might cause a mysterious segfault later on.

You can argue that 'if you break the type system you can do anything, obviously'; that's totally true.

I'm just pointing out the statement: "These crashes are simply not possible in Rust and Haskell" <-- Is categorically false.

You can chop your own arms off in Rust just like anything else (including Go).


> You can easily generate a segfault in Rust in 'unsafe' (or 'trusted') code; admittedly, that restricts errors of that nature to code that uses unsafe blocks, but practically speaking that's pretty common; once you enter an FFI unsafe block, you lose all type safety, but you can totally do it without FFI too, e.g. using transmute().

Not directly addressing what you're saying, but, IME people are far too quick to use `unsafe` code. One needs to be quite careful about it as there's a pile of invariants that need to be upheld: http://doc.rust-lang.org/master/rust.html#behavior-considere...

> once you enter an FFI unsafe block, you lose all type safety

You don't lose all type safety, especially not if the FFI bindings you're using are written idiomatically (using the mut/const raw pointers correctly; wrapper structs for each C type, rather than just using *c_void, etc).


Maybe "unsafe" is being used to mean different things, here. Some may interpret it as to refer to unsafe memory access. Others may use it to mean possibility of crashing at run time (due to null pointer dereference).


Null pointer dereference is unsafe memory access.


Not if you just throw something like an exception when the program tries to do it instead of actually accessing that memory location.


That's what mapping a no-permission (-rwx) page at 0x0 does, and as a result the dereference segfaults, which is an access violation.


There is some subtlety about dereferencing a null pointer. Many languages (C, C++, Rust) state that *NULL is undefined behaviour, that is, the compiler can assume that it never happens and optimises based on this. This can lead to a "misoptimised" program that doesn't actually segfault when the source suggests it should.

http://blog.llvm.org/2011/05/what-every-c-programmer-should-...


No, you are wrong.

First, mapping that page doesn't cause all null pointer dereferences to segfault.

And second, the language doesn't require a segfault. In fact, it explicitly permits the implementation to do whatever it likes.

That is the difference between safe and unsafe. It is in the language definition.


"the language"... which one? AIUI, Go doesn't state that null dereferences are undefined behaviour, but rather that they are guaranteed to panic.


"The language" ?

C doesn't define behaviour of a null deref, but most compilers map a -rwx page there to ensure that attempts to deref fault.

In what circumstance do they not?


The compilers don't map pages there, the operating system does. The problem is the compiler will optimise assuming that a null deref never happens, so you can have source that looks like it should crash due to a null deref, but the compiler has "misoptimised" it to have very different behaviour.

http://blog.llvm.org/2011/05/what-every-c-programmer-should-...


...I'm talking about throwing an exception or something similar, not getting a segfault. Like in Java.


Programs that crash with null pointer dereferences are not very useful.

Considering that "safe" removes most of the usefulness of the word "safe".


Specifically, the word "safe", when referring to type systems, means "memory safe": the compiler or runtime either prevents bad memory accesses by construction or ensures dynamic checks are in place that throw an exception or halt execution when a bad memory access is about to occur. It means the program isn't accessing uninitialized memory and isn't vulnerable to buffer overflows, etc. It doesn't mean your program won't ever crash.
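
In Go's case the dynamic check looks like this (my illustration):

    package main

    import "fmt"

    func main() {
      s := []int{1, 2, 3}
      i := 7
      // The runtime bounds-checks the access: this panics with an
      // "index out of range" error instead of reading adjacent memory.
      fmt.Println(s[i])
    }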

I agree it's better to avoid null dereference errors by not putting null in your language, but by the normal meaning of safe here, Go is safe.


It does not mean "memory safe" in general. It means different things in different contexts.

Also, I wasn't making a point about the word "safe". I was making a point that if Go is "safe", then the word "safe" is useless.

There is a problem here. Don't want to call it "unsafe"? Ok. Call it crashy, instead.


> It's a common pattern in lazy construction to check if the receiving pointer is nil before continuing.

I disagree: if the construction can fail, the constructor must return an error, which will be checked; only if the error is nil can the process continue. There shouldn't be logic on the actual data returned to assert whether a constructor worked or not.


I meant something like this:

http://play.golang.org/p/eqnDLVMHGA (pseudocode)


>For fear of disagree downvotes: I would say that many of the qualms brought up in this article are problems that are encountered fighting the language.

If that's so, it's because it's a language that also fights lots of things a modern programmer wants to do/have.


>When it comes to iteration, there is the generator pattern, in which a channel is returned, and then the thread 'drinks' the channel until it is dry; for example `func (m myType) Walk() <-chan myType` can be iterated over via `for v := range mt.Walk() { [...] }`. Non-channel based patterns also exist; tokenisers usually have a Next() which can be used to step into the next token, etc.

Actually, using channels as a general iterator just for the sake of the range operator is considered an anti-pattern. The reason is not performance (although there is a cost), but the risk of leaking producer goroutines. Your example:

    for v := range mt.Walk() {
      if blah {
        break
      }
    }
How will the goroutine writing into the channel returned by mt.Walk know when there are no more consumers that could possibly read from it?

One way out is:

    done := make(chan struct{})
    for v := range mt.Walk(done) {
      if blah {
        break
      }
    }
    close(done) // or defer close(done)
Picking the right cleanup is error-prone.
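
For completeness, the producer side of that done-channel version might look roughly like this (a sketch; `children` is a stand-in field):

    func (m myType) Walk(done <-chan struct{}) <-chan myType {
      out := make(chan myType)
      go func() {
        defer close(out)
        for _, v := range m.children {
          select {
          case out <- v:
          case <-done:
            // The consumer closed done; stop producing so this
            // goroutine doesn't leak.
            return
          }
        }
      }()
      return out
    }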

What about errors? How will mt.Walk tell you that it had to interrupt the iteration because an error happened? Either your channel carries a struct containing both your error and your actual value (unfortunately Go lacks tuples or multivalue channels), or you pass in a separate error channel.

Furthermore, uncaught panics in the producer goroutine will generate a deadlock, which will be caught by the runtime but will halt your process. One way to do the error channel is:

    errChan := make(chan error)
    for v := range mt.Walk(errChan) {
      if blah {
        break
      }
    }
    err := <-errChan
The producer will use the select statement to write both to errChan and to your result channel. The success of writing to errChan is a signal for the producer that the consumer exited. However, the same caveat applies here about relying on the last statement being executed to avoid a leak in case of returns or panics. Here the defer is less nice since you're supposed to do something with the error:

    func Example() (err error) {
      errChan := make(chan error)
      // Register the defer before the loop so it runs even if the
      // loop body panics or returns early.
      defer func() {
        err = <-errChan
      }()
      for v := range mt.Walk(errChan) {
        if blah {
          break
        }
      }
    }
Next-style methods just pass panics through, and allow you to handle errors either by having a `func Next() (value, error)` or with this pattern, which moves the pesky error handling outside:

    i := NewIterator()
    for i.Next() {
      item := i.Item()
      ...
    }
    err := i.Error()
First, any panic that happens inside either your code or the generator will bubble through. Second, if you return from your loop body, you will have to provide your own error (the compiler will remind you about your function signature, if in doubt). You can return early if the iterator can be stopped and GCed (i.e. it doesn't hold goroutines or external resources); otherwise you'd have to call a cleanup, as with channels.
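
A bare-bones iterator of that shape might look like this (illustrative only):

    type Iterator struct {
      items []int
      pos   int
      err   error
    }

    func NewIterator(items []int) *Iterator {
      return &Iterator{items: items, pos: -1}
    }

    // Next advances the cursor; it returns false once the items are
    // exhausted or an error has been recorded.
    func (i *Iterator) Next() bool {
      if i.err != nil || i.pos+1 >= len(i.items) {
        return false
      }
      i.pos++
      return true
    }

    func (i *Iterator) Item() int    { return i.items[i.pos] }
    func (i *Iterator) Error() error { return i.err }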

The rule of thumb with Go should be that you don't have to do things just because they come with some syntactic sugar. After a while you start to think about beauty in terms of properties, not calligraphy.

However, I do see this as a weak point of the language, which hopefully can be solved by education; after all, Go is so simple to learn that you might be tempted to make it look even simpler. But the fact that the language has (almost) no magic means that you can actually understand what some code does, which IMHO outweighs the occasional syntactic heaviness or having to learn a few patterns.


I'm not suggesting these are the best patterns, or even 'good' patterns. I am simply drawing a parallel between them and Pythonic iterators. When it comes to errors from generator patterns, I've gone back and forth between your suggestion and chan<-struct{T;error} when channel-based generator patterns are appropriate.


> put anything you want in it, then assert the type of what comes out

... which is exactly what the article mentions and criticizes?


I just spent the weekend learning Go and writing a single-writer multi-reader hashtable for thread-safe access. I picked it deliberately because it's against the philosophy of the language, which is to share by communicating instead of sharing data structures directly. It was painful to write:

  // Do NOT reorder this, otherwise parallel lookup could find key but see empty value:
  // store the value at slot+4 first...
  atomic.StoreInt32((*int32)(unsafe.Pointer(uintptr(uint64(slot)+4))), val)
  // ...then publish the key, making the entry visible to readers.
  atomic.StoreInt32((*int32)(unsafe.Pointer(slot)), key)
However, the non-volatile, non-unsafe parts of the code were an absolute joy. Testing was a joy, compiling was a joy, and benchmarking was a joy. I was impressed that it allowed me to bypass the type system completely and do nasty, nasty things in the pursuit of performance. I want a language that lets me do nasty things where I must, but that makes the other 95% of the program, and the job of compiling, testing, and maintaining that program, easy. Go excels here. Rust, C++, Haskell, Scala will never be good at that because they're too damn complicated (although each of them makes the nasty parts a little less painful!)

The end result of my weekend's hacking? On an i7-4770K @ 3.5ghz

  BenchmarkGoMapInsert-4  20000000               110 ns/op
  BenchmarkHashIndexInsert-4      100000000               25.6 ns/op
  BenchmarkGoMapLookup-4  50000000                78.5 ns/op
  BenchmarkHashIndexLookup-4      100000000               17.7 ns/op
About 4x faster than Go's builtin map, for int32 key/value on both insert and lookup. And it allows any number of reader threads concurrent access with a writer without any synchronization or blocking, unlike Go maps. It doesn't allow zero keys, unlike Go maps, and it doesn't allow deletes. Hardly apples to apples, but the performance of pure Go code is impressive nonetheless. 200 LOC, not counting tests.


> However, the non volatile, non unsafe parts of the code were an absolute joy. Testing was a joy, compiling was a joy, and benchmarking was a joy. I was impressed that it allowed me to bypass the type system completely and do nasty, nasty things in the pursuit of performance. I want a language that lets me do nasty things where I must, but that makes the other 95% of the program, and the job of compiling, testing, and maintaining that program easy. Go excels here. Rust, C++, Haskell, Scala will never be good at that

FWIW, I actually think that Rust positively excels at this sort of isolated low-level work due to explicit `unsafe` blocks. Furthermore, the type system is more expressive, meaning the need for this is rarer[1].

In my experience, the rest of the language (i.e. non-`unsafe` things) works very well for maintenance and testing, also in part due to the more expressive typesystem, and things like algebraic data types with exhaustive matches by default (I've done some huge bug-free refactorings to the standard library and compiler, mostly due to the compiler automatically catching all the places that need updating).

On the other hand this comes with the cost of making the "job of compiling" more difficult: the compiler complains about more things.

Re testing: there's unit testing and microbenchmarking built-in: http://doc.rust-lang.org/master/guide-testing.html

[1]: in this case, the type system was positively designed to make this sort of concurrency safer.


Rust has `unsafe` blocks in which you're allowed to do all nasty hacks you want.

Rust actually isn't that complicated. Don't get discouraged by comparisons to Haskell — it's still a C-family language where you can play with pointers and mutable state.

To me Rust still feels like a "small" language (similar size as Go or ObjC, not even scratching complexity of C++). It's mostly just functions + structs + enums, but they're more flexible and can be combined to be more powerful than in C.


I think comparisons to Haskell are not too far off the mark. Haskell is not that complicated of a language either. You can do all sorts of complicated things with it, but the language itself is relatively simple. It just has a lot of stuff that you're likely to have never seen before (extensive use of higher-order functions, higher-kinded types, etc), and its type system produces error messages that seem obscure from the outside. Similarly Rust can have some rather obscure error messages that you're probably not going to have seen before during compilation - lifetime specifiers, errors about using things in the wrong contexts, heck, even "Bare str is not a type (what?)"

I'm much more familiar with Haskell than Rust, but having played around with Rust I think they're on a par with each other in terms of difficulty, depending on your background.


[deleted]


> Haskell is a research language; Rust is designed to be a practical language. It makes a lot of concessions to practicality, C-like syntax and imperative control flow being high among them.

Perhaps I wasn't clear in the post you're responding to: my point was that both Rust and Haskell are fairly "simple" programming languages which seem more complicated, because they introduce a lot of features which are likely to be new to those who are using it for the first time. I wasn't really comparing them as languages per se; that's a separate discussion.

> Common in any language, including Go.

Higher-order functions are common in most languages, but not in the way that Haskell does. Most languages use first-class functions as lists of instructions (do some stuff, and perform the steps in this argument). Haskell makes them truly first-class, such that they're positively ubiquitous: an example is currying, which is everywhere in Haskell and rare in most other languages; another example is monads, which are obviously a core part of Haskell and which require first-class functions (e.g. in >>=) to do anything useful. There are other examples.

> Rust does not have these.

I know; I was speaking about Haskell.

> This is because we want memory safety without performance tradeoffs (global concurrent garbage collection).

Right; it's a perfectly understandable thing to have, but it's not something that (to my knowledge) exists in any other mainstream language. It's an example of something in Rust which is obscure to newcomers.

> Could you elaborate?

I'd have to write some code and run the compiler to get the actual error message, but I recall getting errors about using some reference outside of a context or something. In my fuzzy recollection, it would be something like where I had written `match foo { a => b; c => d}` and I would get some error message which would be fixed by writing `let foo1 = foo; match foo1 {a => b; c => d}`. Unfortunately I don't remember the specifics, but long story short: compiling Rust code produces a lot of very strange error messages to someone unfamiliar with the language. :) In this way it's not dissimilar from Haskell.

> This is because dynamically-sized types are not yet implemented, but they will be for 1.0.

Great to know! If only there were a more helpful error message than "Bare str is not a type." :)


> In my fuzzy recollection, it would be something like where I had written `match foo { a => b; c => d}` and I would get some error message which would be fixed by writing `let foo1 = foo; match foo1 {a => b; c => d}`. Unfortunately I don't remember the specifics, but long story short: compiling Rust code produces a lot of very strange error messages to someone unfamiliar with the language.

Ah, sounds like you were using a value after the destructor on it ran, which was fixed by moving it to a separate variable (so that the destructor ran later). This kind of error is familiar to C++ programmers, so I wouldn't say it's unique to Rust, although it's a runtime error (actually, undefined behavior) in C++ and not in Rust. A good static analysis package for C++ would emit the same error that the Rust compiler did.


Yeah that's probably right. And yes for sure I'm not only talking about things which are unique to these languages, but just things for which there might be large groups of users who are not familiar with them. Similarly, users of ML (or category theoreticians) are going to be less confused by many of Haskell's idiosyncrasies. :)


I love Rust from a theoretical standpoint; it's more beautiful than Go. But Go is more practical: when it comes to getting things done with a team, it just makes more sense (maybe not for every case!)


> Rust, C++, Haskell, Scala will never be good at that because they're too damn complicated (although each of them make the nasty parts a little less painful!)

Why is Rust too complicated to allow you to do low-level hacking?


I don't think it's bad, but to me (and only to me), disappointing. I'm a long-time Mozilla fan... heck, a Netscape fan really. Go was a pleasure to learn; there were no 'gotchas' initially... just a small, easy-to-reason-about language. Rust... with its arrows, angle brackets, pattern matching, etc. seemed just too complex to fit in my brain. I'd love to be proven wrong and try again, but the docs aren't the best. And I know Rust isn't 'released' yet. But if you are defending it, I feel like it should be at a very learnable state. Do you have plans for a 'tour', for a play-by-play like Go offers that can bring others up to speed? Further (and this is really reaching while I've got you)... my favorite part is cross compilation and being able to deploy a single binary anywhere. I tried to compile something with rustc once, but when I moved it to an older box, it failed to run. Will Rust offer such stable static compilation in the future?


>Rust...with its arrows, angle brackets, pattern matching, etc. seemed just so complex to fit in my brain.

It's 2014 already. Angle brackets, arrows, and pattern matching are 1990s-level language technology.

Heck, even a dynamic front-end language like CoffeeScript has these kinds of things nowadays.


I'm not saying you're wrong nor will I argue about it as in the end it really comes down to taste I feel. To me, and perhaps to others, these things make a language harder to read/use, regardless of how pervasive...


While some of it can come to taste, I think having higher abstractions (as long as they don't leak) is beneficial to reading/using a language. Like, objectively.

And foreach vs for is not a leaky abstraction. Nor is pattern matching vs traditional checking and extracting of values. So their value isn't getting lost to corner cases that the equivalent "based on primitives" code wouldn't have. If anything, they make the operation even more explicit.

So I think a lot of it comes down to getting familiar with them.

While something made of "first principles" might be more instantly familiar, it will be harder to reason about, because of its low-level nature, as the code size increases.

Case in point: assembly. It's quite a primitive language with very few constructs, so it's easy to learn to use. But an actual high-level "if" or function call is much easier to grasp at once than checking the 10-20 lines that implement the same thing in assembly (even if you have trained yourself to see the assembly "pattern" for a function call, etc).


To expand on this: isn't what Go does with built-in channel and parallelism primitives similar to that?

They sure didn't exist in C. But people see the value of having these higher-level abstractions.

So, why draw the line on Channels and Goroutines, and not add pattern matching in there? Just because Go already came out offering the former?

If anything, pattern matching is even more applicable to the work we do day-to-day, than channels and goroutines are.

Millions of people write programs without parallelism/concurrency every day, but nobody writes programs without pattern matching. They just do it by hand with lots of if/else and various manual extraction techniques, if their language doesn't offer it.


> the docs aren't the best.

This week is week two of my being contracted by Mozilla to write docs full time. First up, a new tutorial. You can see my work from last week http://doc.rust-lang.org/guide.html , and my first task after I finish breakfast is to clean up https://github.com/rust-lang/rust/pull/15229 , which got some review over the weekend.

So, you're right (at least about the docs) but I'm on it.


Steve - thanks for your reply and work on the docs. Is there currently a place - or will there be one - that precisely describes build/deployment options? This really, in the end, is the most important thing to me. I've searched unsuccessfully for this wrt Rust. It seems that as it sits, items built on a newer machine won't run on an older machine due to libc library mismatches. Will this always be a potential issue? Or will I be able to generate a single binary a la Go?


Well, first of all, I'd point you to http://arewewebyet.com/ ;)

Different linking options are found here: http://static.rust-lang.org/doc/master/rust.html#linkage

Basically, as of right now, when you link statically, Rust will not build in glibc (and jemalloc, IIRC). So, you'll need to make sure that your glibc versions line up. My understanding is that glibc isn't able to be statically linked in without breaking things.

You can use `objdump -T` to see these dependencies. On my system, compiling 'Hello world,' I get symbols for glibc and gcc.

(Go gets away with this by reimplementing the world, rather than relying on glibc. The benefit is a wholly-contained binary, as you've seen. The downside is compatibility bugs, like https://code.google.com/p/go/issues/detail?id=1435)


I mean too complicated for writing, reading, and maintaining and generally just working with from day to day. It costs too much time to do the same thing.


Again, what specifically?

I ask because this has not been my experience writing several hundreds of thousands of lines of Rust code, nor has this been the experience of anyone I have helped get up to speed. Moreover, I believe there is no feature in Rust that is not necessary to achieve safety without sacrificing performance.


Hundreds of thousands of lines of Rust? Out of curiosity, what kind of software are you writing in Rust?

...

And can I get in on some of that? ;-)


The compiler and Mozilla's experimental rendering engine Servo:

- https://github.com/rust-lang/rust/

- https://github.com/mozilla/servo/

(The 'contributors' graphs suggest that pcwalton has added/removed more than 1 million lines of code in those two repos combined.)


> Hundreds of thousands of lines of Rust

Just a very concurrent browser engine ;) One that does everything in parallel. Basically every task that can be parallelized becomes parallel (rendering, JS execution, CSS matcher, etc.).

Also, you can: Servo (said parallel browser engine project by Mozilla) is on GitHub.

https://github.com/mozilla/servo/wiki/Design


The Rust compiler :) He is part of the Rust team and writes Servo.


Can you be any more vague?


it allows any number of reader threads concurrent access without any synchronization or blocking, unlike Go maps.

You mean readers concurrent with a writer? AFAIK, any number of reader threads can get values out of a Go map when it isn't being written to.


Yes, concurrent with a writer. If there is more than one writer, the writers must use a mutex, but the single writer and readers don't block each other. I updated the text to make it clear.


That map only works with keys and values both being int32, right?


Correct. I have to duplicate much of the code for int64 keys and values; the lack of generics hurts there.


You seem to be brushing this off as a minor nuisance, when in many cases it is a showstopper.

Need it for floats, duplicate it again.

This is a solved problem -- the fact Go doesn't have the solution reflects very poorly on Go.


There are some batshit crazy "solutions" offered by Go fans, like "no biggie, just use a templating engine to generate the code for the types you want and compile it".


I actually use that solution in C (we have some reasons to avoid C++).

Of course it sucks, but at least C has the excuse of being from the '70s.


They suffer from a strange distortion field effect.


Ironically, that is essentially the solution C++ uses.


Not quite, given that C++ templates do higher-level type-checking than just expanding and checking.

Also, they support higher-order kinds (a template parameterized by a template). And specialization (specifying special cases). And automatic instantiation based on the static types at the call site. Etc.


It's not like you'll be creating a collections library every time you write a program... yes, generics are cool for code reuse.. but I think some people just overreact, turning the lack of generics into something that supposedly makes the language unusable or forces you to copy-and-paste constantly.. this is far from the truth..


Reading this article and the HN comments made me realize how much people want to use the same language for everything. Unfortunately, this is not possible, since each language was designed with certain use cases in mind. I too am guilty of wanting a language to do everything, to be fast, memory efficient and also easy to program in.

Perhaps our ultimate quest in terms of designing languages is to design one smart enough that can be used to program toasters and clusters alike. Until then, we might as well use the right tool for the job.


> to be fast, memory efficient and also easy to program in.

I think the issue Go detractors have is that it is none of those things, nor is there any pair of those things for which there is no better alternative than Go. If you want fast and memory efficient, you could pick C++ or C. If you want something a little easier to program in, you could pick C#, which dominates Go in all three categories. If you want something even easier to program in, there are plenty of languages like Python that are more powerful than Go.


I'd make the case for Scala for almost anything (nothing so low-level that you need to avoid GC, and I guess not command-line utilities (JVM startup time), but other than that). Haskell or OCaml could probably make a case for being suitable for just about anything. There are big advantages to using a single language in terms of code reuse, deployment tooling and so on.


This was a good read. Can anyone comment on whether they find the problems outlined in the article to really be painful in day-to-day go development?

From my initial dabblings with the language, it feels like its constraints may not actually be a big deal in practice, and may even be more of a help than a hindrance in large projects. It would be nice to get some commentary from more experienced go users.


I chose between Go and Haskell for a project some time around 2012. I was a beginner to both, but came from a background of imperative languages (C, C++, Java, etc.)

Initially I felt the same as you: Go was much easier to get things done in, and I could be reasonably productive quite quickly (moreso than Haskell, which I found very difficult to learn).

However, after some time I found many of the same problems mentioned in this article. Particularly, in many cases I had to fall back to the kind of nasty unsafe code mentioned in this article (like using interface{}). Often, I felt that my code was needlessly verbose. I would frequently write code and feel that the language was preventing me from doing what I wanted directly. Ironically, this is exactly how I felt with Haskell at first (not anymore).

Ultimately, I ended up switching to Haskell, and although it was significantly harder to learn, I felt like it has a lot more flexibility, safety, and importantly lends a clarity to thinking when designing a program.


This sounds strikingly similar to my experience! Though my imperative language experience was mostly with dynamic and/or scripting languages aside from C#.


In practice, Go has caused me less frustration than any other language I've used. I feel like the author's complaints here aren't really grounded in much experience, or maybe he's trying to use the wrong tool for the job.

The author's conclusion:

  · Go doesn't really do anything new.
  · Go isn't well-designed from the ground up. 
  · Go is a regression from other modern programming languages.
is hardly sustainable. Go was production-ready in 2011 with a stable version 1.0. It has a surprisingly mature tool chain and vibrant community. Go cross-compiles from my 64-bit Mac to a 32-bit Raspberry Pi or ARM Android phone on a whim. I can deploy my app by copying a single, self-contained binary. Tell me again that Go does nothing new for us.

Go makes concurrent programming safe and easy (with a nice syntax) -- something that we frankly should have done 30-40 years ago when we first started thinking about multiprocessing. Go was invented by folks like Ken Thompson (who created Unix) and Rob Pike (who created the Plan 9 operating system and co-created UTF-8). Tell me again that there isn't good engineering behind Go.

Finally, Go attacks the needs of modern programming from a different paradigm than we have been using for the last 10-20 years. From the first paragraph of Effective Go:

> ... thinking about the problem from a Go perspective could produce a successful but quite different program. In other words, to write Go well, it's important to understand its properties and idioms.

So of course it's different than a lot of other aged languages. Go tackles newer problems in a newer way. Tell me again that Go is a regression from other programming languages.


Ok, I'll point out again that it emphasizes both concurrency and mutability, which is a match made in hell, and has a type system that's constantly subverted by null pointers and casts to interface{}, which drastically reduce safety. It has a static type system released in the 2010s that doesn't have generics, and deploying static binaries is not a new technology.


> Tell me again that Go does nothing new for us

None of the things you mentioned are new.

> Go makes concurrent programming safe and easy

Mutability & concurrency, nils, interface casts -- these things all go against safe.

> Tell me again that there isn't good engineering behind Go.

You seem to think that a language that has baked in syntax for concurrency, or that has famous people behind it necessarily has "good engineering" behind it. I don't understand how one leads to the other.

When so many mistakes and regressions go into a language, one shouldn't care that famous names are behind it.

> Go tackles newer problems in a newer way

Go is essentially Algol 68 with baked-in concurrency syntax.

> Tell me again that Go is a regression from other programming languages

Losing null safety, sum types & pattern matching, parametric polymorphism and type-classes, all form a huge regression in PL design from the state of the art.


You're thinking wrong. You're also proving the grandparent's point.

You're thinking in terms of "here's this set of bullet point features that I think a language has to have to be a proper, modern language." But the grandparent was asking you to consider that a different set of features might have value for some real-world problems that Go's authors had really bumped into. You reply, "Nope, couldn't have - it doesn't have my bullet point features!"

There are more things in programming than are dreamt of in your philosophy of what a programming language should be.


He said Go had something new to offer, and listed old things.

He said Go makes concurrency safe & easy, when Go emphasizes features that contradict safety and ease.

He said Go tackled problems in a newer way, when Go is really Algol 68 + coroutines.

He denied Go regressing from other languages, when it throws away decades of PL research.

In none of this did he say "Here's an alternate set of features that ...". No. He said concrete, specific, wrong things.

What you are saying is a different thing -- and I also disagree with you.

These "bullet list" features weren't invented for the lols. They were created to solve real problems. Problems that Go doesn't have an alternate solution to.

Go programs dereference null. That is a bad thing. Languages with my "bullet point features" do not dereference null.

Go programs duplicate code to handle different types. Ditto.

Go programs can (and thus do!) mutate some data after it was sent to a concurrent goroutine. Ditto.

Go programs can cast down to incorrect types. Ditto.

The "bullet point features" are important. There are alternatives, but Go doesn't sport any of them.


> Tell me again that Go does nothing new for us.

I do agree with the author here: Go the language does nothing new. Go the platform, on the other hand, is a really pleasant new experience when compared with other languages.

The language is a regression in features compared to what other languages can do, but that is totally understandable when you look at what Go is aimed at.


Hmm, so some people are bent out of shape because of feature regression and over the fact that Go doesn't have the newest, shiniest gadgets. However, fans keep saying that their overall experience is great. Reminds me of something else...


The newest, shiniest gadgets? Like generics? You're kidding right?


The newest, shiniest gadgets? Like generics? You're kidding right?

Like putting words into people's mouths as a discussion tactic? You're kidding, right? So it also lacks some not-new "old dependable" gadgets as well. So what? I don't see what point you're making. Did you write that comment just to pin an indefensible position on me?

As an exercise: Name a language feature Go doesn't have that's newer than generics.


Go packs together a lot of nice things that previously existed in other languages. It still has room for improvement though, as IMHO the language is pretty basic ATM - but exhaustive enough to cover most needs (whether they require stuff like generics or not) in a very painless way.

I love Go, because it fits in my head.


> it fits in my head.

This. I love Go's simplicity. Coming back to Go code I wrote months ago, I can immediately understand what it does virtually every time, which required lots of discipline I didn't always have in other languages. Lots of languages have obscure corners that allow you to do really cool things that aren't obvious, but for the most part, Go doesn't have these; what you see is what's happening.

Are there things that would make Go a better language? Sure! Should the type system be improved? Yup! One thing that makes me cringe is when I open up library code and see interface{} and calls to the reflection package all over the place, but general solutions often require that in Go, and that's a problem. In practice, though, this is almost a feature: if you see that stuff in code you're reading, it's a giant red flag that this code is tricky and possibly slow, and care is needed.
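
To illustrate why it's a red flag, something like this compiles fine and only fails at run time (a contrived example of mine):

    // A general-purpose helper is forced to traffic in interface{}:
    func First(xs []interface{}) interface{} {
      return xs[0]
    }

    // The caller asserts the type back, and a mistake shows up only
    // as a run-time panic, never a compile error:
    // n := First(vals).(int)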

Edit: speeling


> I can deploy my app by copying a single, self-contained binary.

I don't think golang invented static linking. ;)


As much as I like Go, in many ways Go is not what I would consider "beautiful" in an esthetic sense.

Go does, however, have a very pragmatic feel to it. The creators, in general, seem to take a very measured look at things before adding them, and are very careful to keep the compatibility promise for 1.x. The overall result feels very "engineered" (especially when using the tooling).

Go clearly isn't perfect, but yet it feels rather robust for such a young language.


I'm an experienced Go user. I'm also a lover of Haskell and Hindley-Milner type systems, and in practice these complaints are not that big of a deal. Generics may or may not get added in the future, but in practice you can go a long way with just slices and maps.

And while the Hindley-Milner type system is a wonder to behold, and I love working in languages that have it, sometimes those same languages introduce a non-trivial amount of friction to development.

Go's single best feature, and the one around which almost every decision in the language is centered, is an almost total lack of developer friction. If Go has a slogan, that slogan is "Frictionless Development". It's easily the simplest, least annoying, and most "get out of your way" language I've ever used.

[EDIT: some wording was incorrect]


If Go has a slogan, that slogan is "Frictionless Development". It's easily the simplest, least annoying, and most "get out of your way" language I've ever used.

I suspect this is where many philosophical differences in these discussions originate. I appreciate the value of having quick and easy tools, but for production software where I care about quality, I don't want the language to get out of my way if I'm doing something silly, like treating data as if it's a different type to what it really is or assuming there is data to work with at all if a null value is a possibility. The web is plagued by security problems, so it seems odd to me that anyone would promote a new language for writing web-based software that retains obvious and entirely avoidable ways to introduce vulnerabilities.


I doubt that null pointers lead to security vulnerabilities. Panics, yes; worse performance, yes; vulnerabilities, unlikely. Null pointers are not dangling or wild pointers, which are the problematic ones.


I suppose that depends on how broadly you define a security vulnerability. Almost anything that can crash a process on a server -- either literally at OS level or figuratively by requiring something to reset itself before it can continue to do its job -- is probably a DoS attack waiting to happen. That might or might not be as dangerous as something like a remote root vulnerability, but I would argue that it is a serious security issue just the same.

(I'm sure it also goes without saying that having code that can crash with the equivalent of a null pointer dereference is still highly undesirable in a public-facing web server, even if it doesn't actually risk things like data leakage/loss.)


Interesting response... Do the multiple people downvoting not realise that even in Go a null dereference might trigger some sort of automatic reset in your server process (if the unexpected panic is only recovered by some high-level generic error handling logic) or even crash the program entirely (if you didn't have anything up the stack that recovered at all)? Or maybe not realise how these kinds of behaviours might allow denial of service attacks?


What if e.g. a programmer calls some normalize function on data before hashing it? Then if the normalize function sometimes returns null and the programmer hasn't handled this case, an attacker could use this to generate collisions.


That particular problem is not specific to null pointers. It's specific to normalizing functions.


You might want to google "dereference null pointer code execution". Go regresses language design, because it allows constructs that have been proven to fail and are already fixed in other languages.

This has nothing to do with shiny features of the newest language or whether language A or B is someone's favorite. This has to do with program correctness.


Those have to do with the undefined nature of null pointer dereference in C. In Go, accessing a null pointer is guaranteed to produce a panic on every architecture.
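
A few lines show the difference (my sketch):

    package main

    import "fmt"

    func main() {
      defer func() {
        // In Go the nil dereference below is a defined run-time
        // panic, so it can even be recovered from.
        fmt.Println("recovered:", recover())
      }()
      var p *int
      _ = *p // guaranteed panic, never undefined behavior
    }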


You might want to google pcwalton, just sayin'.


>Go's single best feature, and the one around which almost every decision in the language is centered, is an almost total lack of developer friction. If Go has a slogan, that slogan is "Frictionless Development". It's easily the simplest, least annoying, and most "get out of your way" language I've ever used.

Unless you want to do generics. Or extend the language to have custom operators, for things like scientific computing. Or tons of other things.


The generics thing turns out to not really be a problem. Yes, writing 100% Abstract Data Types is unsatisfying. However, when I'm actually writing custom data structures, I generally have an interface that I want to use because I want the data structure to take advantage of a peculiarity of the data being stored.

But, you know, bad for a teaching language.


>can anyone comment on whether they find the problems outlined in the article to really be painful in day-to-day go development?

Author here. Depends on what I'm doing. For most simple programs, the things I listed in the article don't really get in the way. So writing my web server in Go was not that bad at all. It was pretty good, in fact.

It's when I start making larger programs that I start feeling the constraints of the things I listed in the article. Having a strong, capable type system really helps me keep track of large projects.


Been writing Go at least weekly for a while now - no, none of these things are painful in daily usage - they seem like items that are painful in theory but not so much in everyday use, especially in the realm it is intended for.


I've been using Go in a large web application for a few months now. I only skimmed the article, but I didn't notice anything that's caused me problems. I quite enjoy working with it, although I like the idea of adding generics (not that I've needed them yet)/operator overloading/range extension.


I would not say that I am particularly experienced, but I will give my point of view as a not particularly sophisticated programmer. I am interested in seeing what objections others may have to what I write.

Compared to other popular programming languages aimed at web programming, such as Python, Ruby and PHP, Go provides more type safety. Comparable to Java's, but with less code.

Go's runtime and development tools are very lightweight, which is important to me as I use multiple, often dated computers with limited memory.

It is very easy to learn, a low investment. This means it is conceivable for a student previously only exposed to Java to get on board with your software project on short notice.


Python may be popular for web development, but that's not even its most dominant field. I haven't even heard of any clear winner.


I am completely in support of Haskell and functional languages in general. There are some gaps in Go and definitely some glaring problems. But this comparison also only lists the bad. Go is good for what it was intended for, which is concurrent programming and server/web applications.

Just a note: I don't think it is fair to say Go has absolutely no immutability as defined in the article; it does have "const". See http://golang.org/ref/spec#Constant_expressions


>Go is good for what it was intended for, which is concurrent programming and server/web applications.

I absolutely agree that Go is nice for writing web servers! However, there is no reason that it can't still be nice for that and also a well-designed language in general.

For example, Haskell has awesome concurrency features, and writing a web server in Haskell is reasonably nice (not quite as nice as in Go, IMO).


`const` is for compile-time constants, whereas something like `let` in Rust can be used for values generated on the fly. The latter is immensely more powerful and actually provides the described benefits, like guaranteeing that data is not changing under your feet.
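
Concretely (my own sketch; `loadConfig` is hypothetical):

    const pi = 3.14159 // fine: a compile-time constant expression

    // const cfg = loadConfig() // does not compile: not constant

    var xs = []int{1, 2, 3}
    // Nothing prevents xs[0] = 42 later, even from another
    // goroutine; Go has no way to freeze xs after creation.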

Edit: While `const PI = 3.1415…` is threadsafe, it's not very useful in comparison to runtime-immutability :)


Yes, the guarantees provided by true immutability are not met by "const". I wrote the note loosely using the author's expectations and advantages of immutability, not the exact definition. But thank you for letting me clarify.

Also, I just noticed that with Go's Unicode support we can write `const π = 3.14..` if we really wanted to.


Yeah, this post seems to be most about general purpose language features, and not so much about stuff that is relevant to the niche Go is trying to carve for itself.


So he wants Haskell. Haskell already exists and has all the features he wants. He should have written his blog in Haskell, but he didn't, and I know why: because a language, which throws all these features together is no longer a practical language. He only sees the benefits of features, not the cost they introduce.


Could you elaborate on the costs of the features mentioned about Haskell?

As just one example, ADTs in Haskell are implemented in an extremely efficient fashion.

Type classes are implemented in the same way that Go's interfaces are implemented (basically the same way as vtables are in C++).

The immutability features (const) are compile-time checks, and can even help the compiler be more efficient.

"if as an expression" has no real cost.


>He should have written his blog in Haskell, but he didn't, and I know why: because a language, which throws all these features together is no longer a practical language.

Actually, it's because I like Go's standard library HTTP server implementation!

I'm not claiming that Haskell or Rust are magic bullets, or that Go is useless. Far from it!


Hindley-Milner inference, operators-as-functions, and certain other features, I guess I can see arguments against (although I think they're overstated). However, I don't see any reason why generics, no null pointers, pattern matching, and certain other features would be problematic, let alone not be useful, in a language like Go. I'd be curious to hear your justification of the claim that "a language, which throws all these features together is no longer a practical language."


I do think that Meijer and Griesemer agree somewhere in this interview that generics are hard to get right (co- and contravariance) and that forgoing generics is a valid design choice: http://channel9.msdn.com/Blogs/Charles/Erik-Meijer-and-Rober... It's been a while and it might have been another interview though.


I think D got generics right.


Haskell powers quite a few blogs and websites.

The claim that it is "no longer a practical language" is as silly as the "Real World" fallacy.


> A Good Solution: Constraint Based Generics and Parametric Polymorphism

> A Good Solution: Operators are Functions

> A Good Solution: Algebraic Types and Type-safe Failure

> A Good Solution: Pattern Matching and Compound Expressions

People have tried this approach. See languages like C++ and Scala, with hundreds of features and language specifications that run into thousands of pages.

For an unintentional parody of this way of thinking, see Martin Odersky's "Scala levels": http://www.scala-lang.org/old/node/8610

For additional hilarity, note that it is an undecidable problem whether a given C++ program will compile or not. http://stackoverflow.com/questions/189172/c-templates-turing...

--

Go was created by the forefathers of C and Unix. They left out all of those features on purpose. Not unlike the original C or the original Unix, Go is "as simple as possible, but no simpler".

Go's feature set is not merely a subset of other languages'. It also has canonical solutions to important practical problems that most other languages do not solve out of the box:

* Testing

* Documentation

* Sharing code and specifying dependencies

* Formatting

* Cross compiling

Go's feature set is small but carefully chosen. I've found it to be productive and a joy to work with.


You seem completely ignorant of the things you're attempting to talk about. Scala doesn't have "hundreds" of features, nor is the language specification thousands of pages. It's just an outright fabrication to say so.

>Go was created by the forefathers of C and Unix.

Yeah, and it's obvious (and sad) they ignored the last twenty years of PL research and progress.

>They left out all of those features on purpose

Did they? I don't believe this is the case, as I've heard from the creators many times that they want to add generics but haven't figured out the details yet.

Are you really going to sit here and argue that static typing is important EXCEPT when working with collections? That parametric polymorphism doesn't make things simpler?


> Yeah, and it's obvious (and sad) they ignored the last twenty years of PL research and progress.

More than thirty years (at the time it was released), the first language with "modern" generics was ML in 1973.


>See languages like C++ and Scala

Of the 4 you mentioned (Constraint based generics and parametric polymorphism, operators as functions, algebraic types and type-safe failures, and pattern matching/compound expression) C++ really only has 1 (operators as functions).

>with hundreds of features and language specification that run into the thousands of pages.

This describes neither Rust nor Haskell.

>Go is "as simple as possible, but no simpler".

It has mandatory heap usage, garbage collection, green threads. It's more than generous to call that "as simple as possible".

Of the 5 features you mention that Go has "canonical solutions" to (in the form of external tools), I know off the top of my head that Haskell's Cabal takes care of at least 4 of them. I'm not sure about formatting. Rust probably has similar tools, or if it doesn't, they can certainly be added without changing the language.


> Rust probably has similar tools

* Testing

Built-in: http://doc.rust-lang.org/master/guide-testing.html

* Documentation

Built-in: http://doc.rust-lang.org/master/rustdoc.html

* Sharing code and specifying dependencies

The newly released 'cargo': http://crates.io/ https://github.com/rust-lang/cargo/ (alpha, but quickly improving). This will be Rust's cabal equivalent, almost certainly with support for generating documentation and cross-compiling (it already has basic support for running the tests described above).

* Formatting

Missing at the moment, but very wanted: https://github.com/rust-lang/rust/issues/3195 .

(Well, to be precise, the compiler has the '--pretty normal' option, but it's not so good. https://github.com/pcwalton/rustfmt is the work-in-progress replacement.)

* Cross compiling

Already supported, although it requires manually compiling Rust with the appropriate --target flag passed to ./configure, to cross-compile the standard library.


I would be very wary about promoting Cargo as a 'cabal equivalent'. :P


To be clear, I just meant in the context of

> I know off the top of my head that Haskell's Cabal takes care of at least 4 of them

from the post I was replying to.


Perhaps Opam equivalent? I've been using Opam for two years in OCaml, and it just works (tm).


The Scala specification is two hundred and something pages, around a third the length of the Java specification. That's largely because Scala has, in some sense, fewer features than Java: Java has lots of edge cases with their own special handling, whereas Scala has a smaller number of general-purpose features. The complexity comes because it's easy to use all of them at once.


I get what you are saying, but why conflate C++ and Scala? One is a horribly designed, complex language with macro-like templates rather than modular parametric polymorphism. The other is much better designed but, as you say, still complicated. You could have stuck with Scala without degrading into a comparison with C++, which is a universally kicked dog anyway.

It is possible to do type parameters in a way that is simple yet effective. But I can understand why it wasn't done this early in Go's lifetime, especially since Rob Pike isn't exactly well versed in generics (vs. Odersky's experience with Java/Pizza).


> Go was created by the forefathers of C and Unix. They left out all of those features on purpose.

The null pointer was created by Tony Hoare. He later called it his "billion-dollar mistake".

Yet here we are again, in the new millennium, coming up with new languages with null/nil pointers in them.


This article presumes that everybody wants an elaborate type system. I'm not sure that is the case. I still see an elaborate type system as incidental complexity. I may be in the minority and I may not have worked in domains which benefit from such modeling. Maybe I'm stuck in a blub paradigm.

Here's my reasoning. I'm a fan of human language & domain ontologies. Word definitions are quite flexible & do not have an elaborate type system. I don't feel the need to have a provably correct logical system to have a useful conceptual tool (i.e. analogies). I enjoy ambiguity. Ambiguity can lead to paradoxes, which in turn leads to exploration & novelty.

Strongly typed systems, by default, give me the impression that the domain ontology is figured out on a highly precise level. That is never the case. You can almost always go deeper. Domain language precision is tough to model & express.

I prefer data structures to drive operations. I suppose that a schema is often useful, however I don't feel like I need the programming language to enforce the schema.

I also like to evolve the design, using tests. Tests are really examples to exercise the program's API with expected I/O.

People often equate an evolved programming language/paradigm with being better. In the case of Javascript, they point out that it was created in a few days as evidence that it's "bad". The thing about an evolved language/paradigm is that it has evolved down a certain path. That often means restricting the freedom of the programmer to evolve the program down another path. I'm not claiming one way is better than the other. However, I do see tradeoffs to both approaches. I personally prefer a more flexible language. It can be evolved, as long as the evolution does not restrict my freedom to evolve the program.


I recommend Dijkstra's paper: "On the foolishness of "natural language programming" [1].

It explains how natural language is a very poor way to express programs, and how it held back science and progress for many centuries.

Strong types describe your code precisely. If your code doesn't match the model yet, that's fine.

But your code has invariants in it. Things like: "this variable can never be nil, that variable can". These invariants can relatively easily be encoded as static types. Not doing so is just throwing away safety and documentation for virtually no benefit.

Other invariants are similar. Instead of documenting them or keeping them in your head, you let your compiler worry about them. And then when you break those invariants, the compiler is your friend! It helps you find all the other pieces of code that relied on the broken invariant, so you can fix them.
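
To make the idea concrete, even Go's limited type system can encode a simple invariant by construction (my sketch, not the commenter's):

    import "errors"

    // Outside this package, a NonEmpty can only be obtained through
    // NewNonEmpty, so holders of one know the "never empty"
    // invariant holds without re-checking it everywhere.
    type NonEmpty struct{ s string }

    func NewNonEmpty(s string) (NonEmpty, error) {
      if s == "" {
        return NonEmpty{}, errors.New("empty string")
      }
      return NonEmpty{s: s}, nil
    }

    func (n NonEmpty) String() string { return n.s }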

You say you prefer a more flexible language: But Go is extremely inflexible. It has a very primitive set of tools to do everything, clumsily. A language like Haskell, for example, lets you have Go-like coroutines and channels. But it also lets you program with software transactional memory, or use other parallelism constructs.

Also, the inability to specify invariants of your program isn't flexibility. The ability to specify them or opt out of specifying them, which is what strong type systems give you, is.

1. https://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/E...


> It explains how natural language is a very poor way to express programs, and how it held back science and progress for many centuries.

I'm not interested in using natural language to implement the software (write code). I'm interested in using natural/technical language to create an ontology for the architecture. This is where things get gray.

When I think of an architecture, I think of something that evolves over time. I think of architecture as a tool to facilitate communication between programmers, designers, project management, domain experts, & users.

This ontology evolves over time as the software, understanding of the domain, & the domain itself changes.

I see this fuzziness as an accurate model of the conceptual domain, which is ultimately based on the understanding of multiple humans. This understanding is fuzzy and heavily dependent on context. And yet, the ontology attempts to corral this fuzziness into more strongly defined concepts, which map to the implementation. The implementation should not be fuzzy at all.

> Other invariants are similar. Instead of documenting them or keeping them in your head, you let your compiler worry about them.

I usually use guard clauses to protect against nulls. I rely on my tests & production monitoring systems to prove that the implementation is incorrect. When there is such proof, I correct the system. In environments that support rapid feedback & deployment, like the web, this works well. In environments that don't support rapid deployment, iteration, and where lives are at stake, not so well.
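For anyone unfamiliar with the idiom, a guard clause in Go looks roughly like this (hypothetical names):

    package main

    import (
        "errors"
        "fmt"
    )

    type User struct{ Name string }

    // Guard clause: reject the nil case up front and bail out early.
    func greet(u *User) (string, error) {
        if u == nil {
            return "", errors.New("greet: nil user")
        }
        return "hello, " + u.Name, nil
    }

    func main() {
        _, err := greet(nil)
        fmt.Println(err) // greet: nil user
    }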

> But Go is extremely inflexible. It has a very primitive set of tools to do everything, clumsily. A language like Haskell, for example, lets you have Go-like coroutines and channels. But it also lets you program with software transactional memory, or use other parallelism constructs.

That sounds good to me.

> Also, the inability to specify invariants of your program isn't flexibility.

That mostly sounds good. I would want invariants to be optional, which it sounds like is the case.


> I usually use guard clauses to protect against nulls.

An interesting (to me) insight was that in a language with a flexible type system, the types are effectively just a set of assertions at the start and return of every function, that say that the inputs and outputs have certain properties. With a compact syntax and zero runtime overhead, which is nice.

I do think that some strongly typed languages make it too difficult to step outside the type system; in Scala I have to do something like (x.asInstanceOf[{def foo(): String}]).foo() whereas in Python I can just write x.foo(). But once I started seeing the type system not as a fixed piece of the language but as a framework for encoding my own assertions, it became useful enough that I can't stand to live without it.


It's much more than just at the start and return of every function. But OTOH, it's much less than assertions, since they're restricted to a subset that can be proven (usually automatically).

Note the Scala verbosity here is Scala-specific. In HM-style languages, type inference works much better and you don't have to do such things. You might still have to explicitly "lift" a value from one type to another (e.g: Wrap a value in "Just", or use "lift"), but that's a much more minor price to pay.


Lifting in Scala is not at all verbose; I'm talking about casting, calling a method that the type system doesn't know is present.


> I do think that some strongly typed languages make it too difficult to step outside the type system; in Scala I have to do something like (x.asInstanceOf[{def foo(): String}]).foo() whereas in Python I can just write x.foo()

Have to make an explicit cast with a structural type? Surely you can do better, like, say, a trait.

> I'm talking about casting, calling a method that the type system doesn't know is present.

I think a larger example is necessary to see how you ended up in such a situation, but I suppose it's off-topic...


> Have to make an explicit cast with a structural type? Surely you can do better, like, say, a trait.

Usually, yes. But the only way that's as general as the Python line is to use the structural type.


> This ontology evolves over time as the software, understanding of the domain, & the domain itself changes.

So what does this have to do with Go or type systems?

> I usually use guard clauses to protect against nulls. I rely on my tests & production monitoring systems to prove that the implementation is incorrect.

And that's a bad thing. There are better tools for this job. What's the downside of encoding nullability into the types?

> That mostly sounds good. I would want invariants to be optional, which it sounds like is the case.

Every good type system lets you "opt out".

Therefore, it is a bit silly to look at dynamic typing, where you cannot "opt in", as more flexible.


> This ontology evolves over time as the software, understanding of the domain, & the domain itself changes.

> So what does this have to do with Go or type systems?

I've found type systems that don't utilize duck typing to be restrictive & a cause of incidental complexity when evolving the design. I don't really care if something is a categorization of something else. I usually (> 98% of the time) only care if that something adheres to an interface.

I don't like to label people in life either :-)
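Incidentally, Go sits in an interesting middle ground here: its interfaces are satisfied structurally, which is essentially compile-time duck typing. A minimal sketch (made-up types):

    package main

    import "fmt"

    // Any type with a Quack() string method satisfies Quacker --
    // no explicit "implements" declaration needed.
    type Quacker interface {
        Quack() string
    }

    type Duck struct{}
    type Robot struct{}

    func (Duck) Quack() string  { return "quack" }
    func (Robot) Quack() string { return "beep-quack" }

    func greet(q Quacker) { fmt.Println(q.Quack()) }

    func main() {
        greet(Duck{})
        greet(Robot{})
    }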

> I usually use guard clauses to protect against nulls. I rely on my tests & production monitoring systems to prove that the implementation is incorrect.

> And that's a bad thing. There are better tools for this job. What's the downside of encoding nullability into the types?

For the web, it's not that bad. The design is constantly evolving, so most of the development period is spent completing something that is not finished.

There's no downside in encoding nullability, unless extra syntax & incidental complexity is added. It's not a big problem for me so I'd rather not have to do extra work for this feature.

> That mostly sounds good. I would want invariants to be optional, which it sounds like is the case.

> Every good type system lets you "opt out".

Explicitly or implicitly? I'd rather be opted out by default and opt in when I want to. Again, I don't want to do extra work or have incidental complexity.


> I usually (> 98% of the time) only care if that something adheres to an interface.

Well, whether something is "nil" or not definitely matters to whether it adheres to the interface, doesn't it?

> There's no downside in encoding nullability, unless extra syntax & incidental complexity is added

You just need sum types and pattern-matching, which are a straight-forward addition to the language -- and very fundamental to computation, so not quite "incidental complexity".
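For contrast, here is roughly what Go code ends up doing instead, since it has no sum types: a hand-rolled option built on the comma-ok idiom (hypothetical MaybeInt, not a standard type). Note that nothing forces the caller to check ok -- that check is exactly what pattern matching over a real sum type would enforce:

    package main

    import "fmt"

    // MaybeInt emulates an option type with a plain struct.
    type MaybeInt struct {
        value int
        ok    bool
    }

    func just(v int) MaybeInt { return MaybeInt{value: v, ok: true} }
    func nothing() MaybeInt   { return MaybeInt{} }

    func lookupAge(name string, ages map[string]int) MaybeInt {
        if v, found := ages[name]; found {
            return just(v)
        }
        return nothing()
    }

    func main() {
        ages := map[string]int{"ada": 36}
        if m := lookupAge("ada", ages); m.ok {
            fmt.Println(m.value) // 36
        }
    }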

> I'd rather be opted out by default and opt in when I want to

Then why do you use Go and not a fully dynamically typed language? In Go you opt out explicitly with "interface {}".

Why would you rather opt in? Opt out makes sense because types are so cheap 99% of the time.


> I usually (> 98% of the time) only care if that something adheres to an interface.

>Well, whether something is "nil" or not definitely matters to whether it adheres to the interface, doesn't it?

True. Though we are discussing nil/null as being a potential state of data. I actually like & utilize Javascript's notion of falsy (false, "", undefined, null, 0). It's not precise, but most of the time, precision is not needed. Just the general notion that there is a value to operate on or not. Optimizing toward brevity supersedes precision in many cases.

> I'd rather be opted out by default and opt in when I want to

> Then why do you use Go and not a fully dynamically typed language? In Go you opt out explicitly with "interface {}".

I mostly do use dynamic languages. Though, the notion of the Go interface makes the api explicit, yet remains decoupled from the rest of the type system, which seems ok. I've seen proponents use these interfaces to later "discover" types.

> Why would you rather opt in? Opt out makes sense because types are so cheap 99% of the time.

I would rather opt in if I would otherwise have to think about it every time the situation comes up.

Take a collection as an example. Most of the time, I really just want to put a bunch of objects (data) into the collection. I don't want to be a bookkeeper of what type of data is going into the collection. I trust that the data "works" with the rest of the program and will utilize other mechanisms to prove that it doesn't work. I don't want to have to fight compile errors and have to craft a type system just to put an object into a collection.
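In Go terms, that "no bookkeeping" collection is a []interface{} -- a small sketch of what that buys and what it costs:

    package main

    import "fmt"

    func main() {
        // Opting out of the type system: anything goes in...
        bag := []interface{}{42, "hello", 3.14}

        // ...but getting a concrete value back out takes a type
        // switch (or type assertion) at every use site.
        for _, item := range bag {
            switch v := item.(type) {
            case int:
                fmt.Println("int:", v)
            case string:
                fmt.Println("string:", v)
            default:
                fmt.Println("something else:", v)
            }
        }
    }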

---

As a general notion, I like to evolve the design from a simple understanding to a more precise & intricate understanding. My ideal programming language would be forgiving of my initial simplistic domain understanding and facilitate the growth of precision as time goes onward.


This is a good argument in favor of dynamic typing, but Go is not dynamically typed. It has a static type system that its critics consider insufficiently powerful.


Ambiguity is not something that's generally, if ever, desirable in a program, so if a simpler type system has that to offer, I'm not sure it's a benefit. You could also argue that a more elaborate type system allows you to be more expressive in the design of your programs, rather than restricting you.


For robots, Haskell is not used directly, but can be used as a DSL to generate the appropriate C code. Here's one example: http://smaccmpilot.org/languages/index.html


True! Embedded programming DSLs are a very cool area of research. I figured they might be a bit beyond the scope of the article.



Regarding generic data structures, the author should consult the sort package, which has typed generics; much the same approach can be used for generic data structures.
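For reference, the sort package's approach is interface-based rather than parametric: you adapt your concrete slice type to sort.Interface (illustrative types below):

    package main

    import (
        "fmt"
        "sort"
    )

    type Person struct {
        Name string
        Age  int
    }

    // ByAge adapts []Person to sort.Interface.
    type ByAge []Person

    func (a ByAge) Len() int           { return len(a) }
    func (a ByAge) Swap(i, j int)      { a[i], a[j] = a[j], a[i] }
    func (a ByAge) Less(i, j int) bool { return a[i].Age < a[j].Age }

    func main() {
        people := []Person{{"Alice", 30}, {"Bob", 25}}
        sort.Sort(ByAge(people))
        fmt.Println(people) // [{Bob 25} {Alice 30}]
    }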

More complex type inference requires a more complex (and hence buggier) compiler.

Finally, the author should investigate the unsafe package; I believe the following code will do what he wants:

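  // Store the byte 0xFF at raw address 0x1234; this compiles, but crashes unless that address is mapped and writable.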
  *(*byte)(unsafe.Pointer(uintptr(0x1234))) = 0xFF
Verbose? Sure.


One man's "must haves" are another man's "cruft".


I also don't get why the for-loop uses a "range" keyword at all. Isn't that what typing is for? Can't it just figure out that the type is enumerable?

I like most of Go so far, but interface{} is possibly the ugliest artifact of a programming language that I've seen, next to pretty much all of c++

Rust and Go should have a baby


There is a subtle yet key difference between range and for. Ranges will only start execution on input of known size and guarantee termination. Regular for loops do not require a terminating state.


> Ranges will only start execution on input of known size and guarantee termination.

`range` can be used to receive values on a channel, which is certainly not a known size and doesn't have guaranteed termination.


Maybe D will interest you.


Go is designed to be a niche language. A very big niche, but essentially still a niche: service components in clouds, where computing efficiency financially trumps development times.

Go is more productive than C++, but less so than Python or some other alternatives. Go's tooling, linking and libraries make it useful in the Cloud, less so on mobile or personal computers. And the lack of third party libraries, combined with a relatively slower speed of developing these libraries (again compared to Python, JavaScript and others), means that Go will have a hard time going beyond cloud services.


To be honest, I feel a lot of people are missing the point of Go, and I think none of the points made are important. The single most important thing about Go is its simplicity:

For me it is a language that feels like a hybrid between a scripting language and a 'real' programming language. Simple syntax with some powerful, easy-to-use features, impressive library support for being only a few years old, and it compiles (statically) to native code.

That fills a gap that Haskell and Rust don't: these more advanced languages sacrifice simplicity in an attempt to be perfect. Go makes the clear statement of being simple above everything else.

Give an average python/ruby/<insert scripting language here> coder the link to "A 30-minute Introduction to Rust" ( http://doc.rust-lang.org/intro.html ), and he/she will give you a strange look and not understand half of what is being said there. In the end they'll conclude it's not something for them. Give it to a C++ coder and he'll say 'oh nice, but I can do that in C++, use Boost<whatever>, because C++ is superior to all!' - and that coder there is their target audience. A decent C++ coder will have invested too much time to learn another language to solve problems he already learned to solve for himself in C++ a long time ago. Rust might be better and would make his life easier in a lot of cases, but still the majority won't make the switch.

Give the same <insert scripting language here> coder the Go documentation, and he'll be off in no time, writing better code than he used to, producing a single binary which will not be an absolute nightmare to deploy. And that's what every coder of scripting languages has always dreamed of - being able to make programs in a simple straight-forward way, with as few dependencies as possible, without needing a <language X> runtime. On top of that, Go makes cross-compilation dead-easy.

There are a LOT more <insert scripting language here> coders out there than there are C++ programmers. Giving them Go makes running the stuff they write more efficient and confronts them with Git/source control (you would be surprised how many don't know about SCM).


I agree with you partially, but OTOH Go, I don't know exactly why, feels pragmatic and ready for production.

Maybe it's because of the creators, Google backing it, or the promise that 1.x remains compatible, or that it ships with a standard library good enough to write useful server stuff.

So despite all those flaws (I miss generics the most), I think it will become the static Python replacement for the next 10 years.

(It's like how Factor handled 3rd party contributions: one library for some particular task is blessed and shipped w/ Factor. Of course it doesn't scale..)


Go, Rust, and Haskell come from different ways of thinking about how to solve problems using programming languages..

Go aims to be simpler and more concise; in the end you write less code to do the same thing than you would in C++, Haskell or Rust.. because those 3 languages decided to "cover everything" and are worried about other things, creating more burden for the programmer, but with something else to gain

Go is more of a productivity language.. it reminds us that we have better things to do in life than spend all our time coding, like enjoying that extra time with your family for instance..

Therefore Go is good.. it's only MAYBE "not good" for the same things that Rust or Haskell are..

Besides.. this is the wrong way to market some technology or idea.. the best way would be "Why Rust or Haskell are Good" instead of envying the success of others..

It's all about tradeoffs.. and I think this article misses the point.. it cares only about things that will obviously make some languages more fit.. like, if you care more about memory control, type systems and safety.. it's obvious that Rust and Haskell will look good and "correct"

But that's not all there is.. there's teamwork, productivity.. it's a balance.. and it always depends on the problem domain.. some languages are more fit than others.. there's no need for bashing


You made the claim that "in the end you write less code to do the same thing, as you would in C++, Haskell or Rust". Can you provide any examples where the Haskell equivalent isn't more concise than the Go equivalent?

As a data point, here are links to the Haskell and Go implementations of the TechEmpower benchmarks:

Haskell (78 sloc)

https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...

Go (164 sloc)

https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...


Anyone can encapsulate or "hide" the inner workings so the resulting code will look small.. how much of that code in the frontend is already implemented in the standard library of each language??

To see the reality of it.. a better example would be something without any support library...

Cherry picking is easy..

From the same Benchmarks Game:

spectral-norm - Go

http://benchmarksgame.alioth.debian.org/u64/program.php?test...

spectral-norm - Haskell

http://benchmarksgame.alioth.debian.org/u64/program.php?test...


Picking something without a library will give an advantage to the other language in most cases because Haskell is very library based.

I didn't cherry-pick btw, I just went to the first benchmark which included community-written snippets of both Go and Haskell I could think of. Cherry-picking would be me intentionally skipping over examples such as the one you provided and instead posting my example.


I've provided the links as an example of cherry-picking on my part.. showing that it's easy to come up with something that looks good..

FP languages tend to be more expressive, and make you feel more powerful.. but they also tend to be more complex and more verbose.. that's the price of power.. it's the trade-off

I think it's not enough to show something simple when the real complexity is hidden away in other layers..

Also, if lines of code were the measure of simplicity, the Brainfuck language would win everything.. but it doesn't mean you can read what the code expresses with less effort..

Nothing against Haskell in particular, only that it doesn't shine when the matter is simplicity..

Expressiveness.. Abstraction power.. sure.... but that was not the original point I was making in the original article


> I've provided the links as an example of cherry-picking

And they make curious examples of code-size when -- "Each program must implement 4 separate functions / procedures / methods like the C# program."


> As a data point, here are links to the Haskell and Go implementations of the TechEmpower benchmarks:

You can see from the messages in the code that these 2 programs do completely different things.


I don't know that much about Rust, but Haskell is among the more concise languages out there, and definitely more so than Go.


> Go aims to be more simple and concise, in the end you write less code to do the same thing, as you would in C++, Haskell or Rust..

I sincerely doubt that Go is more concise than Haskell. I won't say whether concise is good or bad, but I very much doubt that Go is the most concise.


Go doesn't allow you to create reusable, zero-cost abstractions, so I guess it's not more concise than C++ either.


I'm reading this while doing my daily Java coding and I stumble on this error:

Bound mismatch: The generic method immutableEnumSet(Iterable<E>) of type Sets is not applicable for the arguments (Integer). The inferred type Integer is not a valid substitute for the bounded parameter <E extends Enum<E>>

Sorry for advanced type systems but I really want to go back hacking Go code :)


Who holds Java up as a poster child for advanced type systems? You've made an unintentional straw man argument.

In this particular case, you're using an Integer where a subclass of Enum is called for. The value you're passing may very well be a valid value for the Enum you're trying to use. You may be doing something silly, or it's possible that this is an artifact of generics being bolted on to Java after the fact. In this case, it may be possible for the compiler to infer which Enum you really wanted and insert a runtime check and cast, but then it would be doing you a disservice by silently inserting an opportunity for a runtime error.

I'm not advocating Java's type system, but be glad Java at least has typesafe enums. Every couple of years (across several employers) I run into a very bad bug due to two different return code enums happening to use either 0 or 1 as their "ok" values, and enums of one type being silently cast to the other, resulting in silently okay behavior in the common case and spectacularly bad error recovery in corner cases. Just today I fixed an error where someone had designed an API where an enum of one type was passed to a function needing an enum of a different type, without any translation. Some users had gotten correct behavior by passing an enum of the incorrect type to the API, and some users had followed the API documentation and gotten wrong behavior.


Java's type system is hardly advanced; in fact it's quite primitive.


You have an Integer where it expects a Set.


I've recently tried out Go for its Unicode integration. What I liked at first sight...

* the indexing makes a map[something]bool act like a set. Sets and maps are so similar it always felt wrong for them to be two separate constructs.

* making exported functions/vars/etc begin with a capital letter. When naming important stuff, it's a relief not worrying about naming conflicts with keywords. When naming locals, just use i,j,p,q,p2 anyway.

* using defer and recover instead of catch and finally. The catch clause is really two functionalities rolled into one, and using defer and recover decomplects them (see the sketch after this list).

Other languages should copy those.
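As promised in the list above, a minimal defer/recover sketch (hypothetical function) with the "finally" and "catch" roles separated:

    package main

    import "fmt"

    func safeDivide(a, b int) (result int, err error) {
        // The deferred function always runs on the way out (the
        // "finally" half); recover intercepts a panic (the "catch" half).
        defer func() {
            if r := recover(); r != nil {
                err = fmt.Errorf("recovered: %v", r)
            }
        }()
        return a / b, nil
    }

    func main() {
        _, err := safeDivide(1, 0)
        fmt.Println(err) // recovered: runtime error: integer divide by zero
    }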

What I'm concerned about...

* printf and regex notation are used so much they're really part of the language, but have an entirely separate syntax embedded within strings which must be learnt. But unlike the rest of Go's syntax, they're unintuitive, especially regexes for Unicode. Unicode is meant to be one of Go's strengths. I understand quick parsing is one of Go's primary reasons for existing, but the regex and printf notations could have been cleaner. When you think about it, why are the arithmetic and bitwise operators generally part of a language's primary syntax, but string matching and formatting delegated to sub-languages?

* statements, like if/else and ++/--, don't return values in Go, which is hard to get used to. I understand that keeping statements short so the code looks good after running through gofmt could have motivated this.

Overall, I think Go's a systems language intended for quick parsing and eliminating C++'s complexity, and the author's comparing it to languages with far higher level constructs. The correct solution is for people to implement languages like Haskell and Clojure in Go, making them execute as fast as possible.


>The correct solution is for people to implement languages like Haskell and Clojure in Go, making them execute as fast as possible.

Uh, what? Haskell already compiles to native code, and is faster than Go in many cases.

Also, if you were implementing a programming language, there are much better languages to do it in than Go.


Depends on your particular value of "better." If you're optimizing for programmer time, writing a language in Go is not a bad tactic. You get out of having to write your own GC, you can incorporate a few nice concurrency features with little effort, and you still get pretty good performance. (Admittedly, far from the best performance, though.)


Writing a language in Go, which doesn't have sum types, won't be fun.

Also, if you want to implement a compiler, and not an interpreter, the host language having GC is not very useful to get GC.

Haskell is going to make working with ASTs much easier and safer. It also has a superset of the concurrency features of Go.


I'm writing a language in Go. It is fun. Sum types (and ASTs) are implemented with interfaces, which provide a pretty clean and type-safe way of doing this.
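Something like this minimal sketch (made-up node types; note the type switch gives no exhaustiveness check, unlike pattern matching on a real sum type):

    package main

    import "fmt"

    // Node plays the role of the sum type; each variant implements it.
    type Node interface{ isNode() }

    type LiteralInt struct{ Value int }
    type Add struct{ Left, Right Node }

    func (LiteralInt) isNode() {}
    func (Add) isNode()        {}

    func eval(n Node) int {
        switch n := n.(type) {
        case LiteralInt:
            return n.Value
        case Add:
            return eval(n.Left) + eval(n.Right)
        default:
            panic("unhandled node") // the compiler won't check exhaustiveness
        }
    }

    func main() {
        fmt.Println(eval(Add{LiteralInt{1}, LiteralInt{2}})) // 3
    }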


Can you show some small example encoding of an AST in Go?


The code for Twik has a simple AST in it. Of course, it's Lisp, so it's perhaps too simple.

http://blog.labix.org/2013/07/16/twik-a-tiny-language-for-go


The duplication there in each AST node kinda hurts the eyes :)

Also, the type-switch on interface {} is ugly.

Consider how an AST looks like in Haskell:

  data AST
    = LiteralInt Int
    | LiteralFloat Float
    | List [AST]
    ...
And then an eval looks like:

  case ast of
    LiteralInt int -> ...
    LiteralFloat float -> ...
    List nodes -> ...
and it is safe, rather than interface{}, you get the AST type and you get exhaustiveness checking that you covered all cases.

Also, if you add an annotation to each AST element (e.g: inferred types), you can very easily and safely map over them, etc.


http://golang.org/pkg/go/ast/

Well, not that small, i guess :-)


>You get out of having to write your own GC

You also don't have the ability to write a GC-less language, because there is no practical way to write non-GCed Go code.

>you can incorporate a few nice concurrency features with little effort

You also can't implement any custom OS-level concurrency features.


> You also don't have the ability to write a GC-less language, because there is no practical way to write non-GCed Go code.

True, so you probably don't want to do that.

> You also can't implement any custom OS-level concurrency features.

True. But for "journeyman" level language implementation, the toolset is quite good.


Most of those seem to be things that could arguably be "implementing a language that compiles to Go and then leverages the Go compiler & runtime", which is pretty much orthogonal to "implementing a language in Go".


>You get out of having to write your own GC

No, you really don't. The built-in GC is nothing like the GC needed for lots of other languages.


> The built-in GC is nothing like the GC needed for lots of other languages.

Don't write those languages in Go.


That kind of defeats the whole idea that it's a good language for writing other languages in.

Not to mention that to use the GC, you'd have to carry the whole baggage of Go's runtime with you in the new language. And Go's GC is not that good in the first place.


There are different levels of language implementation. Doing it in Go is more of a "Journeyman" level. If something like the Go runtime is going to enable more people to have written an interpreter, then this is a good thing. Most people's implementations of GC will probably be no better than Go's, or worse.


FYI, to create a set in Go you can also use struct{} for the value. map[something]struct{}. The value takes up zero space then.
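A quick sketch of the idiom:

    package main

    import "fmt"

    func main() {
        set := make(map[string]struct{})
        set["a"] = struct{}{} // the empty struct value occupies zero bytes

        if _, ok := set["a"]; ok {
            fmt.Println("member")
        }
    }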


Dlang has scope(exit) that does the same thing as Golang's defer.


Every single thing listed in the article can be added to Go at any point, since it currently has a very minimal feature set. Once something like generics are added, they have to support it until the end of time or risk having an unstable API like Rust did for a while there.


> or risk having an unstable API like Rust did for a while there.

Every language, including all the languages described in this article, goes through a period of instability while it figures out what works and what doesn't.


Sure, I'd just say that Go has been extremely stable since before 1.0 (~2 years).

The author's point was that we should not use "not good" languages for fear that we might be stuck with them for the next 20 years. I'd rather be stuck with a language whose designers are very resistant to change vs one that gets features haphazardly bolted on every few years (PHP comes to mind).


>I'd rather be stuck with a language whose designers are very resistant to change vs one that gets features haphazardly bolted on every few years (PHP comes to mind).

I absolutely agree! However, I don't think Rust will continue to go through wild changes for much longer. My guess is that it will settle and become pretty fixed.

And Haskell certainly doesn't introduce breaking changes very often.


> However, I don't think Rust will continue to go through wild changes for much longer. My guess is that it will settle and become pretty fixed.

We're shooting for the end of the year, in fact. You can view the list of backwards incompatible language changes here: https://github.com/rust-lang/rust/issues?labels=P-backcompat...


> And Haskell certainly doesn't introduce breaking changes very often.

Actually, I'd say instability is one of the significant challenges with adopting Haskell for long-lived production code. For example, there have been a few discussions in various forums and blogs recently about how much of Real World Haskell no longer even compiles on the latest GHC and current versions of libraries. RWH is a book that rapidly became the go-to text for new Haskell developers only a few years ago, so we're not talking about either bleeding edge functionality or a length of time where software written back then has probably been retired here.

So stability is perhaps an area where Go does have an advantage over the likes of Rust and Haskell today. Rust is still evolving as a new language inevitably will; it has not yet reached the level of stability needed for long-term production use by the general programming community. Haskell is also still evolving, but for a different reason: it is valued as much for being a test bed of bleeding edge programming language design as it is for being a practical programming language.


To be fair, Real World Haskell was written in November 2008, and IIRC that syntax was valid up to GHC 7.6.3 (needs citation) or 21 April 2013.


>And Haskell certainly doesn't introduce breaking changes very often.

Perhaps Haskell the core language doesn't, but GHC certainly does, in every major version.



>Every single thing listed in the article can be added to Go at any point

Every single thing listed in this article could be added to TI BASIC at any point. It would just require a complete re-formulation of the language into something completely unrecognizable.


> Once something like generics are added

The <> notation in id<T>(item) for generic bracketing is harder to read than other bracketing symbols, e.g. (), [] and {}. Unlike those others, angle brackets are also used for comparison operators and arrowheads. If Go ever introduces generics, the Scala-like [] notation looks cleaner and would fit into Go's existing grammar.


The only point of confusion I can see out of using [] is:

    type Foo[T] T
versus

    type Foo []int
for example.

    type Array[T] []T


Yes please. You'd also avoid some icky non-determinism in your parser.


Type inference and operator overloading often lead to less-readable code.

IMO, when evaluating programming languages, we should not only consider writability but also readability. This is especially true if many engineers are going to be involved in the development.

It is good to have type inference and operator overloading in terms of writability. Nobody wants to type verbose code.

OTOH, some verbosity within the source code helps with reading the code. And type information (which type inference and operator overloading try to hide) is one of those things.

So I can respect Go's decision not to support operator overloading / only support some part of type inference.


Sheesh. A laundry list of issues. So why use Go at all? Quit yer whining and just use Haskell or whatever. Go works for some people, not for others. Why hang around and complain? Move on, use something else.


>So why use Go at all? Quit yer whining and just use Haskell or whatever. [...] Why hang around and complain?

Well, I think it's a good idea to get people thinking about programming language design. Sometimes it's really hard to tell what's wrong with something if you don't know about anything better.


>Go works for some people, not for others. Why hang around and complain?

Because if everybody was selfish and self-absorbed enough to do that, there wouldn't be any evaluation of languages outside the personal level?

The way things move forward is through (1) criticizing stuff, (2) fixing stuff, (3) making new stuff (in that order). And all three steps are necessary.


As a love-hater of C++, I found that Go did indeed feel like nothing new under the sun.


The author treats a sharper blade as implicitly better: his measure of "good" is features that make a language more complex.

If the added complexity is "good" to you, then fine. In modern systems, simplicity is a powerful debugger.

Adding all those features that the author talks about -- "Constraint-based Generics and Parametric Polymorphism", "Algebraic Types and Type-safe Failure Modes", and "Pattern Matching and Compound Expressions" -- even if useful, would defeat the purpose of Go.


>In modern systems, simplicity is a powerful debugger.

I don't consider the features I mentioned in this article to constitute "complexity".

I think that Haskell is a beautifully simple language, in the same way that e.g. Euler's identity is beautifully simple. The reasoning behind it may be somewhat complicated, but the result is very simple (and impressive) to behold.


So why not just use Haskell? Why didn't Haskell take over Go's niche?


Because Go is pretty good at its niche, and it has a huge amount of corporate support (from Google), which can make even a mediocre language seem very attractive.


I was around for the Java vs. Smalltalk kerfuffle. I have to say that Go is less hype and more good, pragmatic design than Java. Go also started out with quite respectable speed and a pleasant development experience, which wasn't true for Java.

Also, as for a "huge amount" of corporate support, if we see the equivalent of "Enterprise Java Beans" then I'll concede that Go is "another Java."

Similar to Java: The barriers to outreach are small because the language is in many ways familiar. I think that's a wise and pragmatic choice.


Because they're completely different languages. The philosophies of the two languages are completely opposite in almost all respects (purely functional vs. procedural, highly general vs. not-general-at-all, and so on and so forth). If you find Go interesting you probably don't find Haskell interesting and vice versa.


I do (well, Scala), and it's much nicer. So it's frustrating to see all these articles praising Go and encouraging beginners to learn it, or claiming it's innovative when it's nothing of the sort.


People use the word "complex" in different ways. Do you mean number of features? Do you mean the size of the compiler? By some measures, operator overloading adds complexity; by another measure, it adds simplicity.


I use complexity here the same way the Go community does: it is whatever its authors and users consider it to be. So chances are, if it slows down compilation, it's complexity. If it adds significantly to the grammar or syntax or keyword list, it's complexity. If it does things that can already be done, just differently, it's complexity. If it makes code less clear, it's probably complexity. (You get the idea.)


> I use complexity here the same way the Go community does

To mean anything Go doesn't currently have?


> I use complexity here the same way the Go community does

It is more important for the implementation to be simple than the interface.

-- http://dreamsongs.com/RiseOfWorseIsBetter.html


>If it does things that can already be done, just differently, it's complexity.

Doesn't that make programming languages themselves complexity? After all, you can already do anything in assembly.

That seems like a really weak argument. Sometimes having a more "complex" (in your terms) system leads to simplicity. For example, programming languages in general. Programming languages add "complexity" to computers over machine language programming, but the result is that making a program is a much simpler task.


Possible error:

>If you want to modify a data structure, you have to create an entirely new data structure with the correct changes. This is still pretty fast because Haskell uses lazy evaluation.

I believe the issue is persistent data structures -- the new data structure "remembers" the old one (instead of recreating it) and records changes. (Clojure works like this as well) -- and not lazy evaluation.


I meant that, despite the fact that adding an element to a tree in Haskell naively constitutes "making a new tree", Haskell usually doesn't end up actually making a whole new tree (because of lazy evaluation), so trees are still really fast.


That's still not quite right. It's not lazy evaluation that makes it feasible to "make a new tree" in Haskell (although it doesn't hurt); it's the fact that the new tree shares most of the state of the old tree, and the data structures make the asymptotic time complexity of both update and retrieval very close to (but not quite) O(1).

If you haven't looked at persistent data structures yet then I'd definitely recommend doing so because they are fascinating. A few people have written about Clojure's data structures and the following article looks like it gives a good introduction:

http://hypirion.com/musings/understanding-persistent-vector-...
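The core trick can be sketched in a few lines of Go (illustrative, unbalanced tree): an "update" allocates only the nodes on the path from the root to the change and shares every other subtree with the old version:

    package main

    import "fmt"

    // A persistent (immutable) binary search tree: insert returns a new
    // root and never mutates the old one.
    type Tree struct {
        Left  *Tree
        Value int
        Right *Tree
    }

    func insert(t *Tree, v int) *Tree {
        if t == nil {
            return &Tree{Value: v}
        }
        switch {
        case v < t.Value:
            // Copy only this node; the right subtree is shared as-is.
            return &Tree{Left: insert(t.Left, v), Value: t.Value, Right: t.Right}
        case v > t.Value:
            return &Tree{Left: t.Left, Value: t.Value, Right: insert(t.Right, v)}
        default:
            return t // already present: share the whole tree
        }
    }

    func main() {
        t1 := insert(insert(nil, 2), 1)
        t2 := insert(t1, 3) // t1 is untouched; t2 shares t1's left subtree
        fmt.Println(t1.Value, t2.Right.Value) // 2 3
    }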


That's fair. I'll fix the article. Thanks!


I agree with almost all points but still think Go is a good language for the long term because it follows in the tradition of worse-is-better design. And while worse-is-better produces systems with a lot of warts, it seems to work better in practice than systems that do it 'right'.


"A Good Solution: Operators are Functions"

NO. No. No.

Allowing users to change the language specification and side effects on a per-file, per-project, per-anything basis is a terrible terrible terrible idea.

"This is all covered in Knuth, and we don't have time to go over it again."


Perhaps you've not used Haskell?

There are no "built in" operators in Haskell, other than " ", which is function application, and which cannot be overridden. Additionally, any "infix operator" can be used in prefix form, by surrounding it in parens - and any function name can be used in infix form, by surrounding it in `backticks`.

All other operators, like +, -, $, <, <$>, >>=, are defined in libraries (specifically in Prelude, the "standard library") - these operators cannot be overloaded in ad-hoc manner as operator overloading is done in other languages (bar qualified module imports) - to make use of one of these operators you must implement an instance of the class which contains it, such as Num for +, -. There's also the requirement to implement fromInteger, signum, negate, etc in Num. (If you can't think of a valid "fromInteger" implementation for your custom type, it's obviously not a Num). Also, some classes have associated laws which should prevent you from using them incorrectly

Admittedly the class/instance scenario could be improved with further checking of these "laws", but that would basically require a full theorem prover baked into the language - something that Haskell will probably get sooner or later.

There's still what I would consider "operator abuse" in Haskell - it's not that of overloading existing operators, but that of introducing new operator aliases for virtually every function in your library, as http://hackage.haskell.org/package/lens-4.1.2/docs/Control-L... does (yuck).

A nice convenient feature of Haskell is that you can scrap the official Prelude library (-XNoImplicitPrelude) and roll your own, as some others have done (e.g, http://hackage.haskell.org/package/classy-prelude). This allows you to effectively "clean up" some of the warts from the early design of the language, so it can continually improve - rather than needing to invent a whole new language when we decide it's not what we want. Num is an example of a class which gets lots of stick, because we'd often like to implement either of addition or multiplication, but not both.


"users to change the language specification" ? How does that make sense?


"Go does not support immutability declarations."

Doesn't Go have constants (const) which are immutable? What am I getting wrong here?


Those are just compile-time constants; you cannot have a value computed at runtime stored in an immutable variable (i.e. one the compiler will complain about if you attempt to mutate it, to assist with program correctness).
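Concretely (a small sketch):

    package main

    import "time"

    const greeting = "hello" // fine: the value is known at compile time

    // const started = time.Now() // does not compile: not a constant expression

    var started = time.Now() // compiles, but stays freely mutable:

    func main() {
        started = time.Time{} // legal -- Go has no assign-once variables
    }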


maybe he means an immutable var.. assigned once, never changed.. it's not the same as the const construct in Go


Posting articles about how Go sucks because it lacks generics and operator overloading has become the "I'm going to be different and grow a moustache" of the programming set.

Go is a tool in your kit like any other language. You can't blame the architects for not providing an end all be all solution for every person's needs 100% of the time.


>I like Go. I use it for a number of things (including this blog)

and yet it goes down once it is posted on HN. I have had some articles of mine end up on HN and my site never went down, even when my blog was still WordPress hosted on a machine of mine.

This is probably more of a shortcoming of the various cloud providers than of the language/environment itself I guess.


Not a great metric. My blog once got unexpectedly slashdotted (it ended up on the Reddit and HN front pages for a day; funnily enough, the post was the one I put the least work into by far, and the idea was not even mine). At the time I was running WordPress with WP Supercache using only eight PHP workers. The site never had a hiccup. Ergo PHP is the best language out there? :)

In reality, do use something like WP Supercache. It will save your hide.


> and yet it goes down once it is posted on HN.

The "Retry for a live version" button is powered by Cloudflare, which also uses Go for many of its core services. So I'd be willing to bet it's the programming environment and/or the particular program, not the language.


Dang it! I wasn't intending this to hit HN so soon. I was planning on beefing up my VPS before I shared this one on HN. Sorry. I'll try to keep it up.

If you just let it hang, the Cloudflare cached page should kick in.


Programming language semantics and server availability are distinct topics. If you write a beautiful Haskell program that generates n+1 queries on a database, it will fall over. That doesn't make Haskell bad. Nine times out of ten, making a blog survive the heavy traffic of something like being on the HN front page just means knowing how to cache things.


I see you wrote "It would be nice"


Kudos, I totally agree with your arguments.


We've seen this same article re-written countless ways. Seriously, this is (intentionally or not) a rewording of every existing criticism of Go, by people complaining that Go isn't a language it never tried to be.

No, Go's solution to generics is not interface{}. The moment you say that, you have lost. You are trying to fight Go and make it a language that it is not.

It's always remarkable that such critiques focus on the utterly trivial, while absolutely ignoring things like concurrency or composing complex systems. As always, the color of the shed is what the laymen want to argue about.


But the fact that Go proponents don't actually have solutions to those problems is an issue.

How do you make a custom, generic data structure without syntax overhead? I have not seen any counter proposal to this aside from "maps should be enough for everybody".

How do you avoid the noise from not having operator overloading or a similar alternative? This, again, goes unaddressed.

What are the succinct alternatives to functional abstractions for quickly processing collections of data?

"Just use a channel" doesn't really ring like a reasonable alternative to these questions.


> How do you make a custom, generic data structure without syntax overhead?

In real-world code, the need for generic data structures is shockingly uncommon. It really is. This requirement exaggeration comes from people acting as language tourists, building amorphous code of uncertain purpose, where things like "I'm going to sum up a bunch of unknown objects" seems like a serious need.

For most people who find Go to be a compelling language, it excels for practical, real-world needs.


Notice how you're not giving an answer to Daishiman's question, merely downplaying the importance of having general data structures, though I'm sure you're quite happy that slices, maps and channels can be parametrized by the type of data they contain. Saying that general data structures are uncommon in the "real world" sounds like an example of Sapir-Whorf. I use Java at work and OCaml for a personal project, and in both of them, having data structures that can be parametrized is very helpful. For instance, writing an AST with a type parameter allows you to go from an AST<String> (generated by the parser), where ids are strings, to an AST<Symbol> (generated during semantic analysis), where a node's id is now a symbol.


You're not answering my question. I have a bunch of numeric code where I have dense matrices, sparse matrices, diagonal matrices, etc.

I have heterogeneous priority queues where I want to push in JSON data and plain strings, I have custom iterators, concurrent data structures, default dictionaries, etc.

Seriously, look up Python's itertools and data structure modules and realize that there's a wealth of things that are useful and practical and, above all, reduce code size while preserving interface contracts and semantics. Go is completely unsatisfactory in this regard.


Haskell makes Go's concurrency and composition look primitive and restrictive while also giving you safer code with generics. This isn't an area that Go wins at unless you're comparing it with C, not anything modern. The lack of generics is also not a trivial issue, and neither is the ease with which people can write unsafe code. You can write code as unsafe as you like in Haskell, but the path of least resistance also happens to be the safest. This is my biggest issue with Go; it insists on using unsafe ideas when it's simply not necessary and puts unnecessary burden on the programmer to ensure things are safe; computers exist to make lives easier, and Go ignores that.


Haskell uses less code to do the same thing.

Haskell holds your hand and checks more things for you.

Haskell code runs from two to five times faster for CPU-bound code, and lets you scale to an insanely higher concurrent workload in a safer way.

Why Go? Because marketing.


Smalltalkers used to complain that:

    Smalltalk uses less code to do the same thing.
    Smalltalk lets you offload bookkeeping to the runtime.
    Smalltalk code often runs faster for actual business workloads.
    Smalltalk lets you add features faster with fewer bugs.

    Why Java? Because marketing.
But doing this is was a waste of community time and energy. A better thing to do is to build cool stuff.


> No, Go's solution to generics is not interface{}. The moment you say that, you have lost. You are trying to fight Go and make it a language that it is not.

What is the solution?


There isn't one. Go, by design, does not provide generics, as they would complicate the language for insufficient benefit (according to at least one of the authors).

While I do agree that generics can open up a whole new dimension of programming concerns, I think they are worth the additional syntactic complexity, because they allow you absolute accuracy when it comes to types. If we as a programming community ever want to get to the point where we have provably correct programs, or even reasonably correct programs, clear definitions of functions over an exact set of types are a necessity.


And conversely if you don't care about types, you might as well use a dynamic language and get something concise and flexible. A static type system without generics is the worst of both worlds; it gets in your way a lot while still not providing guarantees of type safety.


Since concurrency and composition are important, why doesn't Go have support for immutable variables and monitoring/linking? These are features proven to make it easier to reason about concurrent systems and to manage failures in a distributed system. From what I can tell it is completely impossible to implement supervisors in Go.


Immutable values would complicate the type system and implementation. The Go creators are very strict about not adding features to the language without enough justification, which is one of the biggest features of the language imo.

There are more synchronization features than channels in Go. Channels and select statements solve most problems very well. However, when another synchronization method is simply required, check out: http://golang.org/pkg/sync/
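For example, a mutex and wait group from that package (a minimal sketch):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var (
            mu    sync.Mutex
            wg    sync.WaitGroup
            count int
        )
        for i := 0; i < 10; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                mu.Lock() // protect the shared counter
                count++
                mu.Unlock()
            }()
        }
        wg.Wait()
        fmt.Println(count) // 10
    }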


Refusing to complicate the language is nice, unless it means complicating every program in that language, or making it less safe.

Repeatedly, Go chooses the latter, and many people hail it while writing programs that crash on nil dereferences or duplicate their code for various basic types.


It is fine to say you want a simple language, but then please do not talk up how great it is at concurrency. It lacks basic features common to every other language with a good concurrency story.


Go is absolutely fantastic at pragmatic, practical concurrency. In most real-world cases, that does not include a strong immutability base, for instance.

Here's the thing about the "Go sucks because Haskell is the best language ever" retort: Haskell has been around for decades longer than Go. It has made essentially zero impact, and even for the case of many of those who use it as the "my big brother" comparison against Go, it isn't a viable part of their daily toolset.

It's a theoretical solution that just makes for a nice checklist comparison against Go. You know this is true. We all know this is true. And everyone goes back to Java or C# or whatever else is your daily driver.

Yet people are making Go their daily driver. Solutions are being built, en masse, in Go. People are having great degrees of success with Go.

Isn't that weird? Might it be that Go adds primitives in a way that makes them usable and intuitive, without becoming strictly theoretical?

So people can keep posting these "Haskell, which I don't actually use in any credible way, is way better" articles, but they simply miss the point. They really do.


Haskell is one example; Clojure, Rust, Erlang and Scala are mainly the ones we think about in this category, but even in modern C++ code const is used quite a lot. Go is not yet very widely known or popular outside the HN bubble; I think it is a bit early for the "daily driver" argument.


I saw an ad for Go programmers in a taxi once...


A very pragmatic response.


> Always remarkable that such critiques always focus on the utterly trivial, while absolutely ignoring things like concurrency or composing complex systems.

But generics are a fundamental tool to solve concurrency or composition. How do you propose to compose complex systems when you can't abstract on the type? How can you add new concurrency constructs that work safely for every type without generics?


Actually Go interfaces map pretty well to a concurrent (and likely networked) environment. It's often straightforward to put a network-based implementation behind a Go interface type. When was the last time you saw a web service that is parameterised by a type?


I guess we have different approaches to what to expect from a programming language.

Maybe if Go were dynamic or had full type inference, I would like it more :)

I bet it's great for the niche between C++ and Java, but I'm not sure I would want to use it as a general programming language.


Haskell (with third party library)

http://snapframework.com/docs/tutorials/snap-api

main :: IO () main = quickHttpServe site

site :: Snap () site = ifTop (writeBS "hello world") <|> route [ ("foo", writeBS "bar") , ("echo/:echoparam", echoHandler) ] <|> dir "static" (serveDirectory ".")

echoHandler :: Snap () echoHandler = do param <- getParam "echoparam" maybe (writeBS "must specify echo/param in URL") writeBS param

-----

Rust (with third party library)

https://github.com/chris-morgan/rust-http/blob/master/src/ex...

//! A very simple HTTP server which responds with the plain text "Hello, World!" to every request.

#![crate_id = "hello_world"]

extern crate time; extern crate http;

use std::io::net::ip::{SocketAddr, Ipv4Addr}; use std::io::Writer;

use http::server::{Config, Server, Request, ResponseWriter}; use http::headers::content_type::MediaType;

#[deriving(Clone)] struct HelloWorldServer;

impl Server for HelloWorldServer { fn get_config(&self) -> Config { Config { bind_address: SocketAddr { ip: Ipv4Addr(127, 0, 0, 1), port: 8001 } } }

    fn handle_request(&self, _r: Request, w: &mut ResponseWriter) {
        w.headers.date = Some(time::now_utc());
        w.headers.content_length = Some(14);
        w.headers.content_type = Some(MediaType {
            type_: String::from_str("text"),
            subtype: String::from_str("plain"),
            parameters: vec!((String::from_str("charset"), String::from_str("UTF-8")))
        });
        w.headers.server = Some(String::from_str("Example"));

        w.write(b"Hello, World!\n").unwrap();
    }
}

fn main() { HelloWorldServer.serve_forever(); }

-----

Go (native)

package main

import ( "fmt" "net/http" )

func handler(w http.ResponseWriter, r *http.Request) { fmt.Fprintf(w, "Hi there, I love %s!", r.URL.Path[1:]) }

func main() { http.HandleFunc("/", handler) http.ListenAndServe(":8080", nil) }

-----

Yeah, ok.


Your primary criticism of Rust is that one (community-maintained, and not "official") Web package requires setting headers explicitly. OK.


Node.js, native:

    var http = require('http');
    http.createServer(function (req, res) {
      res.writeHead(200, {'Content-Type': 'text/plain'});
      res.end('Hello World\n');
    }).listen(1337, '127.0.0.1');
What's the point here?


Since none of your examples are actually doing the same thing, I'm not sure what you're trying to say here. That being said, I've reformatted the code from the original post below (presented without comment):

-----

Haskell (with third party library)

http://snapframework.com/docs/tutorials/snap-api

    main :: IO ()
    main = quickHttpServe site
    
    site :: Snap ()
    site =
        ifTop (writeBS "hello world") <|>
        route [ ("foo", writeBS "bar")
              , ("echo/:echoparam", echoHandler)
              ] <|>
        dir "static" (serveDirectory ".")
    
    echoHandler :: Snap ()
    echoHandler = do
        param <- getParam "echoparam"
        maybe (writeBS "must specify echo/param in URL")
              writeBS param

Rust (with third party library)

https://github.com/chris-morgan/rust-http/blob/master/src/ex...

    //! A very simple HTTP server which responds with the plain text "Hello, World!" to every request.
    
    #![crate_id = "hello_world"]
    
    extern crate time;
    extern crate http;
    
    use std::io::net::ip::{SocketAddr, Ipv4Addr};
    use std::io::Writer;
    use http::server::{Config, Server, Request, ResponseWriter};
    use http::headers::content_type::MediaType;
    
    #[deriving(Clone)]
    struct HelloWorldServer;
    
    impl Server for HelloWorldServer { 
        fn get_config(&self) -> Config { 
            Config { 
                bind_address: SocketAddr {
                    ip: Ipv4Addr(127, 0, 0, 1),
                    port: 8001
                }
            }
        }
    
        fn handle_request(&self, _r: Request, w: &mut ResponseWriter) {
            w.headers.date = Some(time::now_utc());
            w.headers.content_length = Some(14);
            w.headers.content_type = Some(MediaType {
                type_: String::from_str("text"),
                subtype: String::from_str("plain"),
                parameters: vec!((String::from_str("charset"), String::from_str("UTF-8")))
            });
            w.headers.server = Some(String::from_str("Example"));
    
            w.write(b"Hello, World!\n").unwrap();
        }
    
    }
    
    fn main() {
        HelloWorldServer.serve_forever();
    }

Go (native)

    package main
    
    import (
        "fmt"
        "net/http"
    )
    
    func handler(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "Hi there, I love %s!", r.URL.Path[1:])
    }
    
    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }
    
Yeah, ok.


prefix code lines with two spaces to get a pre tag.

  like
  this



