On Go (dehora.net)
147 points by ishbits on June 2, 2013 | hide | past | favorite | 145 comments



The lack of exceptions just seems like a total bear to me. I wrote C for 10 years and did tons of stuff like:

    Open file "foobar.dat". If that fails, abort with this error.
    Read the first 4 bytes into x.
      If that fails, abort with a different error.
    Seek to byte x. If that fails, abort with yet another error.
and so on, and so on, over and over again. Python's exceptions are a huge improvement over this pattern of timidly asking permission to do anything. The fact is there are so, so many occasions where you want to abort a computation if one of the many steps along the way goes wrong. Something like Haskell's Maybe monad is a good way of attacking the problem too.

But Go has neither. It seems to just offer the bad old clumsy C way and say, "Deal with it." To those who have written real Go programs, I'm honestly wondering: how is this not a pain in the ass?


Unlike C, Go isn't overloading what is returned; it has an extra param, and the language has been designed to handle things like initialization and checks in single-line if statements. It does force the developer, who might actually have a clue what the error is and how to fix it, to handle it, but IMHO this is a good thing. Go forces lots of things like this (not using an import, won't compile; not using a variable, won't compile).
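For reference, the single-line initialize-and-check shape looks like this in practice; the file name and error wording below are just placeholders:

```go
package main

import (
	"fmt"
	"os"
)

// readHeader opens a file and reads its first 4 bytes, returning
// an error at the first step that fails.
func readHeader(path string) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, fmt.Errorf("open %s: %v", path, err)
	}
	defer f.Close()

	buf := make([]byte, 4)
	// The init-and-check form: declare n and err, and test them,
	// all in one if statement.
	if n, err := f.Read(buf); err != nil || n < 4 {
		return nil, fmt.Errorf("read header of %s: %v", path, err)
	}
	return buf, nil
}

func main() {
	if _, err := readHeader("no-such-file.dat"); err != nil {
		fmt.Println("error:", err)
	}
}
```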

Honestly, compared to the exception hellscapes I have had to deal with in Java and C++ --- it seems like the path of least surprise. Which, incidentally, has been my favorite thing about Go: the low number of surprises.

A lot of using Go in real work has gone against my expectations. There are a lot of things I initially saw as huge warts (lack of exceptions, generics and import versions), but I liked channels enough (Erlang background) to give it a shot. So far, I have been delighted by using it as a stack (build cycle, deploy method, terse C'ish syntax).


To make one correction, it isn't an "extra param"; Go has full support for multiple return values, full stop.


The problem with this "forcing" that Go does is that it ALSO includes _, which means it's inevitable that lazy developers will get tired of handling errors and just shunt them into _.


You can't stop people from being lazy. Look at all the Java and Python code that forcefully ignores exceptions.

It is something that can be caught with static analysis, however. Someone recently put together an appropriate tool[1] for Go, in fact. It seems to work very well.

[1]https://github.com/kisielk/errcheck


To be fair, the ability to use static analysis for error code checking is not something that is unique to Go. There was a paper recently on doing this for C (which found hundreds of bugs in the Linux kernel due to incorrect error code handling):

http://pages.cs.wisc.edu/~liblit/ghc-2011/ghc-2011.pdf

(Incidentally, the hard part of this analysis is not verifying that you checked the error, it's verifying that you propagated the error codes properly—that requires analyzing higher-order control flow.)


The Go AST package is something that I think will really help, as it matures, with writing great tooling.


To allow lazy error checking without losing safety, perhaps Go could special-case _ so that error values assigned to _ abort the program on failure.


I would love something like this, actually.

I'm not sure I like the idea of special casing _ specifically, but a similarly concise way of saying "if this fails, this thread of execution is FUBAR" would be great.

"!!", maybe?


Could just be "!" by itself no?


They can't change that at this stage without breaking the Go 1.0 compatibility promise.


> Unlike C, Go isn't overloading what is returned

In C, you can write code that doesn't overload errors and return values; e.g.,

err = someFunction(&returnValue, param1, param2, param3);

I'm not saying this is common -- and in fact the standard C library does a lot of the overloading you're talking about -- but for your own code and functions, you can separate out errors and return values, as shown above.


Writing exception-safe code is non-trivial, and there has been a lot of debate recently about whether exceptions are a mis-feature. The main worry is that once you allow exceptions, sensible-looking code can lead to many non-obvious bugs. This link gives some insight:

http://stackoverflow.com/questions/1853243/c-do-you-really-w...

Some more reasons why exceptions are problematic:

http://mortoray.com/2012/04/02/everything-wrong-with-excepti...

Go's choice I think is very well considered. If reliability is important to you then I believe handling errors explicitly leads to clearer, more correct code.


These criticisms of exception handling are primarily based on their implementation in C++; they are not issues inherent in exceptions. Exceptions are clearly more problematic when they are bolted-on after the fact to a language with manual memory management and a large existing body of code that is unaware of them.

But writing exception-safe code IS trivial, in basically any language except C++. Garbage collection is all you need in most cases, and try-with-resources/RAII/etc takes care of closing I/O.

Note that Go still needs defer() to reliably take care of the latter, so it's not clear to me what is gained by the omission (or rather, strong deprecation) of exceptions.


I'm writing a pretty big system in Go, and I haven't seen this as much of an issue. Could you provide more insight into your point? I'd like to see some code to compare, if it's not too much to ask.


I'm not sure how great of an example it is, but I had the thought when recently rereading this routine from an old game, reading its level file:

    static void scanlevel(int num, FILE *fp)
    {
        filemap_t filemap;
        int i;

        if (fseek(fp, mapptrs[num-1], SEEK_SET))
            error("Can't load level %d from blockman.lvl", num);

        if (fread(&filemap, sizeof (filemap_t), 1, fp) < 1)
            error("Can't load level %d from blockman.lvl", num);

        if (filemap.startx < 0 || filemap.startx >= LEVWIDTH
            || filemap.starty < 0 || filemap.starty >= LEVHEIGHT)
        {
            error("Level %d is corrupt", num);
        }

        map.startx = filemap.startx;
        map.starty = filemap.starty;

        for (i = 0; i < LEVWIDTH*LEVHEIGHT; i++)
        {
            map.tiles[i] = (tiletype_t)filemap.tiles[i];
            if (map.tiles[i] < 0 || map.tiles[i] >= NUMTILES)
                error("Level %d is corrupt", num);
        }
    }

This shows another benefit of exceptions, which is that if uncaught, they stop the program with a traceback of the exact point they occurred. So it's not even necessary to write most of the checks above. `error` here is a routine that aborts the program; you get that behavior by default with exceptions.

Whereas in C/Go if I forgot one of those error checks, the error would occur silently, leaving the program in some weird inconsistent state that I never planned for. It would just do something stupid and maybe crash or panic later on, far from the place where the initial error occurred.

I guess I'm just arguing for exceptions, which is old news as languages that have them have been around for quite a while. But Go doesn't offer much of a substitute of which I'm aware. The explanation of how it solves these problems has not been forthcoming.


I think the key difference is between assertions/"this should never happen" error checking, and actual error conditions that you want to pass back to client code, because it knows better than you do what the right thing to do is.

When you're writing reusable library code (and when you think about the scale of Google's codebase, they must have an insane amount of these libraries), it's important to make this distinction. There are some error conditions where you really just want to say "if this ever happens, just die, because there's nothing sane to be done", and Go provides panic() for these situations, similar to the error() function in your code above.

For situations where you do want to return a meaningful error to the client, I think Go's multiple return values provide a very good way to do it, far better than the overloading of NULL or -1 that you find in C and C++.
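As a sketch of that split, with illustrative function names: return an error when the caller knows better what to do, panic when no caller can sanely continue.

```go
package main

import (
	"errors"
	"fmt"
)

// parsePort returns an error: bad input is an expected condition,
// and the caller knows better than this function how to react.
func parsePort(n int) (int, error) {
	if n < 1 || n > 65535 {
		return 0, errors.New("port out of range")
	}
	return n, nil
}

// mustHaveConfig panics: a broken invariant, "this should never
// happen", with nothing sane left to do.
func mustHaveConfig(cfg map[string]string) {
	if cfg == nil {
		panic("nil config passed to mustHaveConfig")
	}
}

func main() {
	if _, err := parsePort(70000); err != nil {
		fmt.Println("recoverable:", err) // the client decides the outcome
	}
	mustHaveConfig(map[string]string{"env": "dev"})
	fmt.Println("ok")
}
```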


> I think Go's multiple return values provide a very good way to do it...

Better than C? Sure. But not better than languages that provide sum types; some of which have been around since the 70s.


The list of cool things Go doesn't have is a darn long one. They opted for simplicity at lots of points. I think it is worth noting these were decisions, not necessarily oversights.

There are a lot of great languages that end up mostly academic because they lack whatever magical balance of features, simplicity and usefulness it takes for a language to get mind share.

I suspect Go might have hit the magical balance with channels, strong types, great build system, simple minimal syntax and language keywords, fairly opinionated best practices (and formatting) and static single file deploys.


Multiple return values are something completely different from sum types. Just because Go by convention returns errors as an additional return values where other languages prefer sum types, you shouldn't conflate the two.


Parent didn't say they were the same, rather they implied that sum types are better.

Sum types can handle multiple return values seamlessly in a typesafe way as a special case, but are not limited to that because they may have different data shapes other than simple products, and callers can be checked to deal with each possible shape at each call site by the compiler.


Claiming something is "better" requires the two things to be comparable, and thus reasonably similar in their respective natures. Claiming something is better than something else implies this similarity (otherwise any comparison would be moot), which I refuted in this particular case.

On a side note, the same people who claim that sum types are "better" are never able to come up with a constructive proposal how sum types could be integrated into Go in an elegant way.


So much hand-waving!

They are "better" in that they are, in fact, more constrained; only when the error case arises will there be any accessible error value; otherwise, the actual expected value will be found. Since go uses an ad hoc product type, you always get an error value and the return value, even if they are mutually exclusive most of the time.

Also, they are both ways to build larger types from smaller ones, and the way they go about doing it is rather obvious from their names, and thus the contrast.

> On a side note, the same people who claim that sum types are "better" are never able to come up with a constructive proposal how sum types could be integrated into Go in an elegant way.

Forgo the cutesy anonymous members for the massive benefits of sum types? For a team which prides itself for its ability to perform trade-offs, they sure were rigid in this stance.


> They are "better" in that they are, in fact, more constrained; only when the error case arises will there be any accessible error value; otherwise, the actual expected value will be found. Since go uses an ad hoc product type, you always get an error value and the return value, even if they are mutually exclusive most of the time.

But multiple return values are there for much more than returned result and error. You conflate that with the specific use of returning result and error, and based on that, you claim that sum types are better. That's a straw man par excellence.


Sum types and multiple return values are not mutually exclusive, though several languages today use tuples to emulate multiple return types (see Scala, Rust for examples).

I am not a language designer, but I have become interested in languages in the past couple of years. Sum types require some sort of generics implementation, which Go does not have. I think the design choices the authors made regarding the language have made adding generics that much harder, that now they are struggling to find the "Go way" of fitting them into the language.


This is the most illuminating reply in the subthread for me. Very good point, thanks for the clear explanation.


Does panic gracefully isolate a query-of-death, or does it take the entire server down?

Does panic provide the stack-trace?


A panic unwinds the stack until it's recovered from. If it's never recovered, the entire server will go down.

If the panic isn't recovered it will print a stack trace, if it is recovered you can get a stack trace with runtime.Caller().

Go's panics are exceptions.
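A minimal demonstration of that unwinding; note that recover only has an effect inside a deferred function:

```go
package main

import "fmt"

// risky triggers a runtime panic (out-of-bounds index) and recovers
// from it. Without the recover, the panic would unwind to the top of
// the goroutine, print a stack trace, and kill the process.
func risky() (msg string) {
	defer func() {
		if r := recover(); r != nil {
			msg = fmt.Sprint("recovered: ", r)
		}
	}()
	var s []int
	_ = s[3] // index out of range: the runtime panics here
	return "no panic"
}

func main() {
	fmt.Println(risky())
	fmt.Println("still running")
}
```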


Thanks!



> The explanation of how it solves these problems has not been forthcoming.

It'd be interesting to see what you'd do with the code above in C++, and where you'd put the error handling code for diverse errors that might occur reading this particular file.

The Go approach is to handle errors locally, often in the calling method, which makes it clear where they are handled and what the outcome is, and easier to recover gracefully, without unexpected exceptions from code in libraries or other code in the program. Some large users of C++ (like Google) refuse to use C++ exceptions in their own code - so they are not entirely without controversy.

In the code above, if you used exceptions, and relied on the libraries to throw exceptions for errors, you'd have to throw your own exception at:

error("Level %d is corrupt", num);

So you'd have a mix of places where exceptions were generated (in unknown library code, in your code) and, for the reader, an unknown mix of places where they are handled. I'm sure this could be done gracefully, but it does mean missed errors might be handled at a much higher level in the code, far away from where they were generated. That can leave errors unnoticed until it is too late to do anything but output a stack trace and exit, which to the user seems equally stupid as crashing or panicking at some later point.

If you exit the program on a simple error like being unable to load a single game file, it's not very pleasant for the user. I'd expect the program to recover gracefully and show the user an error before continuing, which is easy enough with Go's pattern of error returns. It's harder with exceptions, where the stack may have unwound past the loading code, unless you start handling exceptions in the calling code one level up, which looks very much like the error handling of Go. So there are trade-offs to using either method, aren't there?
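The local-handling pattern described above might be sketched like this; loadLevel, the file-naming scheme, and the message wording are all made up for illustration:

```go
package main

import (
	"fmt"
	"os"
)

// loadLevel reports failure to its caller rather than aborting.
func loadLevel(num int) ([]byte, error) {
	data, err := os.ReadFile(fmt.Sprintf("level%d.lvl", num))
	if err != nil {
		return nil, fmt.Errorf("load level %d: %v", num, err)
	}
	return data, nil
}

func main() {
	// One level up, the caller decides the outcome: show a message
	// and carry on, instead of unwinding past the loading code.
	if _, err := loadLevel(3); err != nil {
		fmt.Println("Sorry, that level couldn't be loaded:", err)
		// fall back to a menu, retry, etc.; the program keeps running
	}
	fmt.Println("still in the game loop")
}
```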


Not using C++ exceptions in Google's code base is partly due to historical reasons (the original code base did not use them), and partly due to exceptions being harder to implement correctly in C++.

They even acknowledge in their C++ style guide[1] that: " Things would probably be different if we had to do it all over again from scratch."

[1] http://google-styleguide.googlecode.com/svn/trunk/cppguide.x...


I'm not clear on which error check you think you might "forget". If you're performing an operation that can fail, wouldn't that be a clue that you need to check for failure?

You can't actually write code like that in Go, by the way. It's not going to let you read data directly into a struct like that, nor should you really be doing so in the first place.

I'd be happy to provide a more detailed analysis, and possibly even Go-equivalent code, but without further context (at least the definitions for `filemap_t` and whatever struct type `map` is, if not a full description of the file format and its meaning), it's impractical.


> I'm not clear on which error check you think you might "forget". If you're performing an operation that can fail, wouldn't that be a clue that you need to check for failure?

Sure - and the possibility of exceeding the bounds of an array would be a clue that you need to bounds check, but there's still a hell of a lot of C code out there with array overflow errors. You can argue that people who make these errors are bad programmers, but that's fairly irrelevant - most programmers of any level will end up working with code with errors in it at some point. Exception stack traces are an extremely useful way to find out where something went wrong when someone failed to do some necessary error checking.

I have no experience with Go, so I'm not saying what it does is wrong - I'm just curious. Say a customer experiences a failure with your software caused by some missing/incorrect error handling, what do you do to work out what happened?


> the possibility of exceeding the bounds of an array

Not possible in Go, the runtime will panic.

> there's still a hell of a lot of C code out there with array overflow errors

An extremely easy error to make in many cases, which is why modern programming languages bounds-check.

> Exception stack traces

Go will give you a very nice stacktrace should it ever panic.

> I have no experience with Go

Which is really the problem. People keep arguing about Go's merits based on no substantive understanding.

It's very obvious which operations can fail without a panic in Go, because functions explicitly return error objects -- actual error objects, not magic numbers. The return signature for the Go equivalent of fread is (int, error), not (int).


> Which is really the problem. People keep arguing about Go's merits based on no substantive understanding.

They are not arguing, they are asking questions to improve their understanding.


That may have been too harsh in reply to AlisdairO specifically, in which case I apologize.

The context of the thread, however, very much is people arguing. See for example graue's post a few levels up.


While I may have phrased my post somewhat harshly, I too asked because I wished to improve my understanding, and the responses have definitely helped.


> > the possibility of exceeding the bounds of an array
>
> Not possible in Go, the runtime will panic.

Is -B for disabling bounds checking no longer supported?


I see no such option documented, and I would certainly never enable such a thing in real code.


It was never documented, but any long time Go user knows about it.

Looking at the code it seems it has been removed.

> ... I would certainly never enable such a thing in real code.

So I assume you don't do C or C++. :)

While I agree with you, there are certain cases where it might help. That is why most strongly typed languages with native compilers have allowed selectively disabling bounds checking since the Pascal/Modula-2 days.

I only support doing this if profiling proves it is really worth it, given the security issues.


I think it's still there.

    package main

    func main() {
        slice := []string{"first", "second", "third"}
        println(slice[1])
        slice = slice[0:2]
        println(slice[2])
    }

Compile with

    go build -gcflags -B wat.go
    ./wat

Output

    second
    third

Without the `-B` you'll get a runtime panic. I'm on Go 1.1. (Maybe it's removed in tip?)


Ah ok, I was trying to locate it via the browser.

Next time I'd better check out the source and do a proper grep.


> So I assume you don't do C or C++. :)

If C had a simple universal switch for bounds checking, I'd turn it on everywhere and immediately revoke commit privileges for anyone on my team who turned it back off. But it doesn't, and necessarily can't, making your statement nothing more than an annoying exercise in wrongful pedantry. It is contextually obvious I was talking about Go code and/or languages/compilers with such a switch.


I was trying you out, because if you make such a statement then I am led to believe you stay away from languages that don't provide control over bounds checking.


See the output of

    go tool 6g -help

The `-B` switch will disable bounds checks. You can use it like this:

    go build -gcflags '-B'


A tip from someone that just recently started using Go: read the spec! I wasted some time early on trying to learn things that are very clearly and simply spelled out in the spec. The Tour + Spec is really all you need if you're a somewhat experienced programmer.

The spec is well written and tiny compared to most languages, and it's probably the only thoroughly accurate and complete Go reference at the moment.

Step 1. http://tour.golang.org/

Step 2. http://golang.org/ref/spec


Effective Go (http://golang.org/doc/effective_go.html) and Go by Example (https://gobyexample.com/) are also great. The former is especially so if you want to learn Go idioms and understand why things are done. It's a good complement to the spec.


For some reason, I really enjoy showing this slideshow to people -> http://blog.menfin.info/Presentations/20120709_Golang_introd... ... it is short, gets across a lot of the core ideas very quickly.


Yes, please read the spec. All the specs!

Shockingly few developers will read the f*cking spec for anything. It mystifies -- or perhaps terrifies -- me. It's one of the criteria I try to use to judge if I'm dealing with someone who seeks to truly solve problems, or just keep the build light from turning red.

Some specs are actually kind of fun to read. POSIX is surprisingly pleasant, and sometimes amusing in a "Why is THAT warning label there?" kind of way.

You don't have to treat them like a novel, but many are excellent reference material to keep at your fingertips.


You know the saying about warning labels: every warning label has a good story about why it came to be.


I love how the dialogue on HN about Go has gone from pessimism and largely uninformed criticism, to regurgitation of the team's own talking points ["simple, orthogonal features"], to a more nuanced appreciation of what trade-offs Go makes.

Hopefully these are individual developers shifting through a continuum of enlightenment, rather than the conversation itself migrating to a more enlightened population.

This last question is testable, of course, though sadly HN does not offer an official API.


Those of us who think that Go is a truly sad language to release in modern times (nullability everywhere, no parametric polymorphism, no sum types, products for errors instead of sums, ...) have just lost interest in explaining over and over again why a new language without those features belongs in the 1970s or 1980s, and not in modern times.

The crowd that remains is mostly composed of people who have never used an ML-style language (Haskell, OCaml, F#, SML, ..) and come from a background of C, Java and Python. Go is definitely an improvement over these languages in many areas.


As a pragmatist and someone who enjoys Haskell and F#, I find your comments very academic. There will always be less popular, feature packed fun languages like Rust, Haskell and OCaml, and there will always be teams that use them to great advantage (Jane Street). But IMHO, lots of these systems are a little creaky in the support structures (cabal for example) and deploy features.

That said, I think a lot of what makes Go great is because of its simplicity, lack of surprises, and general lack of cleverness. You can get your hands around the language features very easily, in mere hours.

Beyond that, I think the ease of building tooling on top of the AST (or in general), the ease of deploying code to production, the build speeds, the inclusion of go get and fmt, the policies around imports and variable use (use or lose) all add up to be more than the sum of its parts. It is very obviously built by a team looking to use it in production, on real projects, as soon as possible.


> But IMHO, lots of these systems are a little creaky in the support structures (cabal for example) and deploy features

Cabal used to be horrendous. Now it's just mediocre, and better than the distribution tools I've seen in the Python world (e.g: easy_install). Much better than the lack of distribution tools in the C and C++ world.

> That said, I think a lot of what makes Go great is because of its simplicity, lack of surprises, and general lack of cleverness. You can get your hands around the language features very easily, in mere hours

nil everywhere is an easy to explain language feature. It might be easier to explain than pattern matching. It's definitely worse though.

I'm not saying that Go has no strong points in its favor.

It has plenty surrounding it that you mention.

It's just sad that this is all around such a poor language.


The horror of python dependencies, build and deploy (I currently am working on a 250k+ LOC python project) is part of what made me really love what is around the edges of Go.

There is a lot of "worse" in Go. It seems the "worse" choices were made for three reasons: (1) make the language simpler, (2) make the compiler and tooling simpler to build, and (3) ship a production-usable product in X time. I think all three might yield benefits in the long term.

Getting angry about a "poor language" is pointless; there are hundreds. It seems people mostly get angry that it might become more popular than their favorite language. For a bit, I was a bit of a stick in the mud about Go because I was hoping Erlang (maybe with Elixir) would take off. After that I was worried it might get popular before Rust had a chance to get off the ground.

But, after having spent some time with it -- I have grown to like its odd, pragmatic mix of tooling and features.


I didn't say I was angry. I said a new language that repeats past mistakes again is sad.

The authors of Eiffel went ahead and paid a very dear price to fix their nullability-everywhere. The designer of C# has said that if he could go back in time and fix one design mistake in the language, it would be the nullability-everywhere mistake, and that it is responsible for a huge percentage of field issues with C#.

Yet Go was designed after these, and still put in nullability everywhere.

Seeing society pour tonnes of resources into a bad language, when we could all have benefited from those resources being poured into a good one, is sad. Having a new language with new approaches and ideas is great; but one that we already know will create poorer-quality software, not so great.

What a huge waste of talent and resources we're seeing here.

P.S: Cabal is not really so bad anymore.

My experience when using "cabal install <pkgname>" is generally 90% success, and 9% failures that I can fix by a simple "cabal unpack" on some overly restrictive package to fix its version constraints and carry on.


Fixing the "null" issue isn't straight-forward. If you eliminate it completely, then you need to add sum types to the language. If you provide nullable pointers in addition to non-nullable pointers, then you're adding complexity to the language by having two different kinds of pointers. It's a trade off. The Go designers chose a different side of that trade off than you would have. That's perfectly reasonable.

I don't think you appreciate arguments for pragmatism. I write plenty of C and Go code. I rarely run into null pointer errors in Go, while I run into plenty of them in C. It's possible to mitigate the Billion Dollar Mistake without encoding it into the type system.

This null issue has come up plenty of times on the Go mailing list. Search for the billion dollar mistake, and you should find some responses from the Go devs.

> But one that we already know will create poorer quality software

We do? I think you meant, "I believe it will create ...".


Adding sum types to the language is not a bad thing. In fact, it's a great thing, even outside of nullability! It would also make error handling much neater--and not a special case--in Go. Sum types also have a natural symmetry with structs, and in language design (just like in physics), I figure symmetry means you're on the right track.

Of course, to make both of these reasonable, you would also have to add parametric polymorphism. And while you're at it, you may as well throw in full type inference. (I mean, why not?)

These aren't gigantic changes, and it's already well-understood how to implement all these efficiently--OCaml has it all, along with a fast compiler that outputs fast code.

But Go doesn't. I really wish Go had taken more (i.e. some) inspiration from OCaml.

Also, ignoring the merits of this particular case, I don't agree that a decision is reasonable just because it's the "different side" of a tradeoff. Virtually any choices can be recast as tradeoffs, but there are still wrong decisions to be made!


I know all about sum types. I love them. My central point was to show that removing nullability is a trade off rather than a freebie. I thought the context of the discussion made that clear. I feel that you did not address my point other than to say, "well yeah, but it's wrong."

> These aren't gigantic changes

Then we have a fundamental disagreement. Adding sum types and parametric polymorphism would drastically change the language. I am not claiming that the change would be for better or worse.

> Also, ignoring the merits of this particular case, I don't agree that a decision is reasonable just because it's the "different side" of a tradeoff. Virtually any choices can be recast as tradeoffs, but there are still wrong decisions to be made!

Could you point me to authoritative sources that state how languages should be designed?

Going by the character of your post, it seems like you don't care much for the cohesion of a language design. The Go devs care about this, a lot. So saying "we should add feature X, and while we're at it, Y too" without stating how those features will integrate with the rest of the language just isn't going to cut it for people who care about the totality of a language design.


>I didn't say I was angry. I said a new language that repeats past mistakes again is sad.

You do realize that these so-called language design mistakes seem to have no correlation with how successful a language becomes?

>when we could all have benefited from these resources being poured into a good one is sad.

A language that nobody wants to learn is not a good language.


I pick on Cabal because when I started Haskell a few years ago, it cost me days and days... and almost made me hate Haskell. I learned to get past it and still <3 Haskell. But it isn't just Haskell, it is lots of less popular languages and toolkits, they have significant "offramps" via deploy problems, tooling problems, build problems (but not pure language problems, the core language might be gorgeous).

That said, the ability to build a static binary quickly and scp to a server is ... amazing. The convention (not forced) of localizing what you depend on in your /src and making hermetic commits is amazing in practice. It means that I simply git clone FOO && go build BAR and it builds and spits out a static binary I can ./bin/BAR


1. Parametric polymorphism may still come. I think we need to understand their connection with interfaces and builtins before it will be clear what to do. A thousand frameworks may need to bloom before we have enough data to do that.

2. Nullability IS a kind of sum... and the idea of a zero value would be impossible (or at least ruinously inefficient) without nullability. Secondly, the fact that all dereferences are checked gives you most of the same benefits, just minus all the type noise.

3. Products for errors are actually a GOOD thing, in that they allow developers to easily choose between pedantic error handling or relying on downstream panicing to handle situations that are unrecoverable anyway. Whereas with type-level enforcement, you HAVE to do all that extra checking, and that straitjacket costs you clarity and elegance.

4. Interfaces do allow for sum types, and type switch statements and/or dispatch allow for pattern matching. Not terribly elegant, but effective and understandable.

My closing thought: type systems are crutches for reasoning about the behavior of programs at compile time... and we need all the help we can get with that challenge, so yay for type systems!

At some point, however, they become cages, because there will always be a frontier beyond which they will be unable to encode constraints that are obvious to the person writing the program, and at that point, the resulting contortions will obscure, not enlighten.


2. Nullability is an implicit sum you have everywhere. One that is not checked for exhaustiveness by the compiler.

What do you mean "a zero value would be impossible"? The whole idea is that sometimes you need it, and sometimes you don't, so you want your language to let you distinguish these 2 cases and then check for your program's exhaustiveness in checking it when you do have a zero value.

The fact dereferences are checked converts one runtime error into another. Proper sum types convert a runtime error into a compile-time error. How do you possibly view this as "most of the same benefits"?

3. No, they allow you to use the wrong value when there actually was an error and the value has a different meaning than intended (or just another nil dereference).

The type system forces you to have either error checking or you can use explicit type system escape hatches which are then greppable. For example, Haskell does have unsafe functions like "fromJust" which unsafely assume a result value is not an error -- but these functions are rightly considered a code smell. If you want to use them, you can, at least making the smelly parts of the code easily findable. In Go, all the code is smelly, instead.

4. Type switch statements are so clunky I don't think they can, with a straight face, be considered "effective" alternatives for pattern matching.

Consider:

  myFunction Nothing = ...
  myFunction (Just x) = ...
This is understandable. This is effective. Do you think the equivalent type switch in Go is?
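For contrast, here is a rough sketch of the equivalent in Go, using a marker interface as a hypothetical Maybe with Nothing/Just variants, and a type switch in place of pattern matching (the compiler does not check the switch for exhaustiveness):

```go
package main

import "fmt"

// Hypothetical Maybe-style sum type: a marker interface that only
// Nothing and Just implement.
type Maybe interface{ isMaybe() }

type Nothing struct{}
type Just struct{ X int }

func (Nothing) isMaybe() {}
func (Just) isMaybe()    {}

// myFunction mirrors the two Haskell clauses with a type switch.
func myFunction(m Maybe) string {
	switch v := m.(type) {
	case Nothing:
		return "nothing"
	case Just:
		return fmt.Sprintf("just %d", v.X)
	default:
		// The compiler cannot verify exhaustiveness, so this is needed.
		panic("unhandled Maybe variant")
	}
}

func main() {
	fmt.Println(myFunction(Nothing{})) // nothing
	fmt.Println(myFunction(Just{X: 42})) // just 42
}
```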

Type systems don't become cages and aren't "crutches". They are tools that you can use as much as you want. A program in Haskell can be written to be type-unsafe and put all the code in useless types. Or it can encode any amount from very little to a lot in the type system. It's just a tool you get to choose how much of your invariants you want to verify.

Go doesn't offer you that tool.


2. Re zero value: If pointers were by default not nullable, then the zero value of a struct containing pointers would have to recursively allocate objects to fill those pointers. That's a deal-breaker for the notion of zero values.

3. Null values are almost always what result, as opposed to undefined or incorrect values. So it's a product type that is effectively a sum type. As for code smells: actually, there is a greppable equivalent in Go, and it is "... , _ := ... ". Future Go compilers might allow for a flag that treats "error" types specially in this regard.

Interesting to hear about fromJust, didn't know about it.

My own experience is that unchecked errors aren't where Go's 'debugging load' lies. The load is often in reasoning about concurrency, causal dependencies, and state machines.

4. If you're saying that because pattern matching isn't a core feature of Go, you wouldn't use it for everyday code, you're right. I'm a fan of pattern matching, I use it in Mathematica all the time (albeit unchecked). But my point stands: you can build sum types through interfaces, and when that makes a design significantly cleaner, a decent Go programmer will use it.

5. "Type systems don't become cages": well, a lot of people will disagree with you there. Ask C++ programmers about const. Ask Java programmers about checked exceptions. Ask Haskell programmers about logging. So I guess that's just, like, your opinion, man. :)

5.1. Go may not offer you as rich type verification as Haskell, but it does things Haskell can't: it can detect race conditions, for example, in concurrent code [I shudder to think how you even write concurrent code in Haskell].

Another closing thought: you focus a lot on Haskell. Haskell seems to be a very fecund place to devise type-theoretic patterns and techniques. And it seems to have taken the niche of "programming-language research testbed", which also results in a lot of interesting ideas. But there isn't nearly as much evidence that it is a smart choice for production code.

I've been impressed by what I've seen, read, and understood about Haskell. Still, I have an unshakable impression that a lot of the quality that is associated with Haskell code is explained better by the hypothesis that the bar for writing Haskell code is very high: bad programmers can't write Haskell programs that do the things they want to do, so we see a lot of nice Haskell code and little bad Haskell code because of the base competency of the audience.

But that doesn't imply that an experienced programmer would write higher quality code in Haskell than in Clojure or Go. We all have a complexity budget to spend on using tools to their full potential and doing things a certain 'proper' way. Simple languages leave more of that budget to spend on your program itself. Type-heavy languages leave less, with the promise that the cost is amortized. Calling Go a "sad language" and implying its authors are incompetent because you don't agree with their particular choice of budget demonstrates, to me, a certain arrogance and lack of imagination.


2. Why does the struct need a zero value? If it does, why not wrap it in an option/Maybe?

3. So that means you might get both a success and error value at the same time? And ignore one of them?

As for concurrent problems being worse than nullability, that sounds very plausible -- as Go also messed up concurrency by making any guarantee about immutability of shared state impossible.

4. That means the threshold for using sum types is pretty high -- which means you end up not using them when you ought to (e.g: error products rather than sums).

Sum types would make error handling significantly cleaner, but are not used that way in Go.

5. I think C++ programmers rather like const. Maybe you are confused with C programmers? C just got const all wrong. Java got checked exceptions wrong, too.

Haskell logging is not a serious problem in my experience. Debug logging is done the same impure way as in other languages. Production logging is easy to add as a feature to your monad stack.

Some bad type systems become cages -- and then people hate all type systems. Ironically, Go also has a type system, and since it is a rather poor one, it is a cage as well.

5.1: I'm sorry but this point of yours shows rather extreme ignorance about Haskell. Haskell can statically prevent race conditions. The facilities for writing concurrent code in Haskell are far more advanced than those of Go.

> But there isn't nearly as much evidence that it is a smart choice for production code.

I use a bunch of languages for production code, including Haskell. In my experience, Haskell is the best choice for production code of most kinds.

There is likely a selection bias in Haskell. That isn't why Haskell code tends to be good. As an example of a measure of quality: Haskell makes it easy to verify there are no runtime crashes in your program. Even excellent programmers will not be able to make that guarantee in Python, Java or Go. This is a pretty huge thing -- having a guarantee of no runtime crashes. The same Haskellers who can give this guarantee for Haskell code would have no hope of giving this guarantee in these other languages. I'm not talking about a big "try: except:" clause -- but about a compiler exhaustiveness check that verifies all functions are total, and a cursory check that all recursions are well-founded.


That's selection bias in action. Many people with criticism about Go probably moved on.


To D and Rust, speaking about myself.


My biggest issues with Go are compatibilities with the C ABI and the lack of shared library support. I have not looked in a while, so I may be wrong, but here are my big questions: 1. Can C invoke a Go-compiled and exported callback/function? 2. Can I build a Go .so/.dll invokable via C or other non-Go FFI methods? 3. If I build a commercial Go lib for others to link with, how can I distribute it without distributing the Go source?


I suspect unless someone gets really excited about this and modifies one of the toolchains, this won't happen in the near future. You can do (1) with some ugly syntax. Regarding (2) and (3) -- I know some projects are happening in both those spaces, but haven't kept up with them.


As far as I know, you can call from C(++) into Go, as long as the outermost layer (main) is Go.


In fact all three of those things are being worked on, or planned for the medium term.


For 3. couldn't you just "compile but not link" your library packages and distribute the binary (non-source) .a object files for all platforms? Other Go code can then fully import and link those afaik.


Everybody compares Go and Python. Why is that? Go compiles to a binary, implements static typing, got rid of classes, got rid of exceptions, and added lots of special purpose keywords on top of reinventing C's syntax. The only similarity I see is the "import" statement.


They both approximately match up on mental effort/friction/hassle for me, which largely seems to be coincidence since they do have dissimilar feature sets. For example, it used to be that I would use Python when I thought that C (my language of choice) was going to be too much of a hassle; now I use Go when I think that C will be too much hassle.

I think this accounts for a large portion of the Go<->Python talk you see.


When app(lication)s meant desktop apps, most of us used C as an app dev language. Over the years, the platform changed and most app developers switched to Java and C# and, more recently, to Python and Ruby. Those who still use C these days mostly use it to deal with unusually demanding constraints (performance, memory, etc.) They use C when only C will do, which usually is not for apps. C is not the default app programming language; it's a special-purpose language of sorts.

Go is not very competitive with C at what people mostly use C for these days: tightest constraints and maximum customization. Go is very competitive with Python (& Ruby), though, as a high-productivity app development language, being almost as productive as those two but with much higher performance, lower memory requirements, and no need to install a big runtime. Its biggest shortcoming, vis-a-vis Py & Ruby, is the immaturity of its environment (libraries, toolchains, web frameworks, etc) due to its relative newness. As that changes, many will switch from P&R to Go, but fewer will switch from C to Go, because most who don't absolutely need C have already switched away from it.


What I always find interesting is that the Python and Ruby developers are willing to lose abstraction power to delve down into Go, while they would be able to keep it if they moved to PyPy, JVM/.NET based languages, or FP languages with native compilers instead.


Yes, but in return they get something more mature than PyPy, something with less hassle than bringing along a VM (JVM/.NET), and something less alien and marginal than FP languages with native compilers.

They might not get the best technical experience, but they DO get:

1) mostly imperative (and most LIKE it that way), 2) nice concurrency support, 3) a lot of niceties (first class functions, implicit interfaces, maps, etc), 4) quite full batteries included, 5) lots of other kids using it, 6) regular success posts on HN, 7) nice, and mostly predictable, reasoning about speed and memory


> Yes, but in return they get something more mature than PyPy, something with less hassle that bringing along a VM (JVM/.NET), and something less alien and marginal than FP languages with native compilers.

There are commercial native compilers for Java and .NET if you are willing to pay for them. Language != Implementation

As for what one gets,

1) Offered by C++, JVM, .NET languages, D, Rust, ...

2) Scala, Haskell, OCaml, F#, D, Rust, Erlang, ....

3) Almost any modern language with exception of C and Java.

4) Java and .NET have more batteries

5) Yes, because big boys have better tools

6) Fortune 500 companies don't use languages based on HN posts

7) Tooling still behind what big boys use for other languages


>There are commercial native compilers for Java and .NET if you are willing to pay for them. Language != Implementation

For C or C++ and a few others, maybe. For most other languages, language and implementation are very much tied, for practical reasons (size of community, maturity, degree of compatibility, etc etc).

As for using some Java/.NET "commercial native compiler": that is not a real (or desired) option for most people/companies. For one, it delves into non-standard waters, and can bring obscure bugs, restrictions (e.g. reflection related), etc. Second, you have to pay. Third, it's one more thing to pile on top of your language choice. Whereas my argument was that Go gives the people using it FEWER things to worry about.

For the rest of your list: the point was that they get ALL of them from Go at the same time. Being able to get one or another feature from this or that language is not comparable to that.

I also don't understand why you bring Rust into this. Rust is not production ready -- even the project leaders advise AGAINST using it for anything production related. It's also in flux, and the syntax is still changing. So, nice language as it shapes to be, isn't it obvious that it's not in any way an alternative to Go for at least one more year?


> For the rest of your list: the point was that they get ALL of them from Go at the same time. Being able to get one or another feature from this or that language is not comparable to that.

I just wanted to enumerate a few languages where those features are present.

If you feel like, I can present an extensive list of every Go feature and which languages offer similar support.

But what would be the point besides fueling a flamewar?

I jumped into Go at the beginning, because I was looking for something with the features of today's mainstream languages, and the language's Oberon influence interested me, given the time I spent with Native Oberon back in the 90's.

In the end I became disappointed as the language is not much more than Limbo (1995) reborn.

> I also don't understand why you bring Rust into this. Rust is not production ready

So what? Go also wasn't when I was using it, and that did not prevent companies like Canonical from using the language in production.

I just get the feeling if Go authors weren't working at Google, the language wouldn't be given front page presence on HN every day, given its design.

On the other hand, I wish Go becomes a success, as it might help decrease even further the use cases where C is still relevant.


>So what, Go also wasn't when I was using it, and it did not prevent companies like Canonical to use the language in production.

Go was far more production ready even when it first appeared (after internal development). For Rust, we are at the stage of internal development at this point, only it happens publicly. So the two are in no way comparable in that respect -- and that's why Canonical had no problem using Go in production.

>I just get the feeling if Go authors weren't working at Google, the language wouldn't be given front page presence on HN every day, given its design.

Yes, but they are, and so it is. Which reminds me of a Jimmy Carr joke about his girlfriend.

"People would say to me: she's only with you cause you're famous. And I'd tell them, well, I AM famous, so what's your point?".

So, even if people are using Go because of Google, well, it IS Google that is behind the language, so this also helps it.


You kinda made his point for him didn't you? He said all these are benefits of Go, and you pointed out he would need to use lots of different languages to get them elsewhere.

Notice the lack of overlap in your own points (1-4); that right there is the crux of the issue. Beyond that, on (5) I simply respond... what? I think you are flat out wrong about (6) and how they pick languages (I contract for them); they "hear" node.js is super fast and will double output and want to use it. (7) is part of what Go was designed to enable; building tools for it is AWESOME (Go AST!).

If anything you made a killer post, maybe just not for the reasons you think.


> You kinda made his point for him didn't you? He said all these are benefits of Go, and you pointed out he would need to use lots of different languages to get them elsewhere.

Not really, I just found it easier to reply to his bullet points like that, using languages that are known for certain features.

Actually most of those languages cover all Go features.

Anyone with background in compiler design can easily provide a paper like article that picks up every single feature and describes which languages offered them initially and their evolution across programming languages.

However something like that would only contribute for flame war discussions without any productive result.

I also work for Fortune 500 companies with multi-site offshoring projects, so I do have some experience on that world.

I was attracted to Go, because it is a Google language, but I got disappointed with its features, after using the language during 2010-11 timeframe.

It remains to be seen how the language will evolve in the marketplace, but would HN care if it wasn't a Google language?


I fear I was unclear; I was not advocating using Go in the place of C.

Rather, I'm now using Go in the place of Python, in situations where C was not appropriate (traditionally my hobby projects have been something C is appropriate for, or something that Python is appropriate for).


No, you were clear, and I seem to be thinking the same way you are. Mine was a more general comment on the common observation that Go is pulling more people away from Python than away from C. App developers who don't need the bit-by-bit customization power of C have already left it for more productive languages such as Python. Those who continue to use C are those who really need the bit-by-bit customization that C provides but Go and Python don't. Those who already left C for the huge productivity boost offered by Python can get back a lot of the performance and size advantages they left behind without giving up much of the productivity by switching to Go. That's an attractive option.


Keyword arguments with default values is the one Python feature that makes my Go code more verbose than my Python code.

  def f(a=11, b="some default", c=55.5, d=D()):
    #...

  f(c=44.4)
becomes

  type Config struct {
    a int
    b string
    c float64
    d interface{}
  }

  func f(cfg *Config) {
    a := 11
    if cfg.a != 0 {
      a = cfg.a
    }
    b := "some default"
    if cfg.b != "" {
      b = cfg.b
    }
    c := 55.5
    if cfg.c != 0.0 {
      c = cfg.c
    }
    var d interface{} = new(D)
    if cfg.d != nil {
      d = cfg.d
    }
    //...
  }

  func main() {
    f(&Config{c: 44.4})
  }


I just solved a similar problem in a slower but fairly general way. It's not appropriate everywhere, but it works well for the use-case I built it for.

I can't share the actual code, unfortunately, but by using struct field tags, reflect, and json, you can end up with code that looks something like this:

    type FArgs struct {
        A int `def:"11"`
        B string `def:"\"some default\""`
    }
    func f(margs map[string]interface{}) {
        var args FArgs
        SetupArgs(margs, &args)
        // ...
    }
If f() is being called in a tight loop, your overhead can be enormous, but outside performance-critical code, it's probably good enough.
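Since the actual code isn't shared above, here is a hypothetical sketch of what a SetupArgs along these lines could look like, using reflect to fill explicitly passed arguments and encoding/json to decode each field's `def` tag otherwise (names and details are my assumptions, not the commenter's implementation):

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// SetupArgs fills the fields of *out from margs (keyed by field name),
// falling back to the JSON-encoded default in each field's `def` tag.
// It panics if a passed-in value cannot convert to the field's type.
func SetupArgs(margs map[string]interface{}, out interface{}) {
	v := reflect.ValueOf(out).Elem()
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		if raw, ok := margs[f.Name]; ok {
			rv := reflect.ValueOf(raw)
			if !rv.Type().ConvertibleTo(f.Type) {
				panic(fmt.Sprintf("bad type for %s", f.Name))
			}
			v.Field(i).Set(rv.Convert(f.Type))
			continue
		}
		if def := f.Tag.Get("def"); def != "" {
			// The tag holds a JSON literal, e.g. 11 or "some default".
			if err := json.Unmarshal([]byte(def), v.Field(i).Addr().Interface()); err != nil {
				panic(err)
			}
		}
	}
}

type FArgs struct {
	A int    `def:"11"`
	B string `def:"\"some default\""`
}

func main() {
	var args FArgs
	SetupArgs(map[string]interface{}{"A": 7}, &args)
	fmt.Printf("%+v\n", args) // {A:7 B:some default}
}
```

Panicking on a type mismatch matches the behavior described in the follow-up comment below this one.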


You're not just losing performance but also static type checking.


Not really. If the passed-in or default value doesn't convert to the field's type, SetupArgs panics.

Edit: Variations on this that offer compile-time type checking of passed-in values are possible, too. My use case wouldn't work well with that, though, because the function needs to be called by other functions that would have no clue about the FArgs struct.

You could instead do a version that just swaps in a default for any nil fields.

And in any case, you've got a lot more safety than the original Python code, no?


"And in any case, you've got a lot more safety than the original Python code, no?"

Only insofar as your approach puts the values back into a struct that can subsequently be used in a type safe manner. Arguably that's better than having no type safety at all.

But I think my main gripe with your design is that it is not just less type safe and slower, it also comes with more mental friction and verbosity than what Python does.

Granted, it's a lot less verbose than my Go code, which makes it a good solution in some scenarios. But in my view it's not a good general replacement for default keyword args and you didn't claim it was.


I certainly wasn't arguing it was a perfect substitute for default args, just that it was easier to manage than the example you gave.

The application I'm currently working on is mostly a direct port of Python code to Go. Python code I originally wrote. You needn't tell me it fails to accomplish all the things Python's default args do.


i usually do something like this instead:

  type Config struct {
    a int
    b string
    c float64
    d interface{}
  }

  var defaultConfig = Config{
     a: 11,
     b: "some default",
     c: 55.5,
     d: new(D),
  }

  func f(cfg Config) {
      // ...
  }

  func main() {
     cfg := defaultConfig
     cfg.c = 44.4
     f(cfg)
  }
having the "keyword" arguments as a separate type makes them potentially useful as a currency to pass to other functions too, rather than as a set of attributes and values defined for one function only.


That code is broken; the observation that a field has its default value does not imply that no value was set.

Test case:

  func main() {
    f(&Config{a: 0, b:"", c:0.0, d:nil})
  }
That calls f(11,"some default",55.5,D())

=> It is even harder to replicate that Python idiom (I wouldn't know how, but I have only glanced at the Go language spec. It would help if you could auto-initialize structure members with values only known to the structure itself, or if one could replace that &Config above by a function call, like this: makeConfig(){c: 44.4})
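One way to get the makeConfig-style call wished for here is a constructor that owns the defaults and takes an override function. This is a hypothetical sketch (names invented for illustration); it also sidesteps the zero-value ambiguity, because the caller overrides fields explicitly instead of the zero value being interpreted as "unset":

```go
package main

import "fmt"

type Config struct {
	a int
	b string
	c float64
}

// makeConfig starts from the defaults for f, then applies the caller's
// overrides via a mutation function (nil means "all defaults").
func makeConfig(override func(*Config)) Config {
	cfg := Config{a: 11, b: "some default", c: 55.5}
	if override != nil {
		override(&cfg)
	}
	return cfg
}

func f(cfg Config) {
	fmt.Println(cfg.a, cfg.b, cfg.c)
}

func main() {
	f(makeConfig(func(c *Config) { c.c = 44.4 })) // 11 some default 44.4
}
```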


"That code is broken; the observation that a field has its default value does not imply that no value was set"

That doesn't necessarily mean the code is broken. It may or may not be possible in a particular situation to treat the zero value as semantically special as I have done. But if it's not, then you are of course right that this creates an additional problem.

Using a separate function to initialize a Config is fine, but you would actually have to create a makeConfigForF function because the defaults are specific to f and not to Config. It's all messy. That's why I like keyword default arguments.


> and added lots of special purpose keywords on top of reinventing C's syntax

Added? I think Go has fewer keywords than C in total.


Go has interfaces that you implement just by having the methods, without declaring that you're implementing the interface (e.g. "implements" in Java). This kind of feels like duck typing in python. For example, in python lots of APIs work with file-like objects, e.g. anything with a read method -- you can do the same sort of thing in Go.


Structural typing is nothing like duck typing. People saying that don't know what they are talking about.

Yes, both features allow one to escape the manifest typing present in languages like Java. But duck typing is so much more, as duck typing allows you to make shit up at runtime (e.g. come up with new interfaces based on the data you have).

By definition a static type system will reject pieces of computation that are correct when described in a dynamic type system. Also Go's static type system isn't even a good static type system, as it makes it next to impossible to work with monads or Haskell's Maybe. And serving as proof that Go's type system is weak, consider how these are non-issues in Haskell or Scala or dynamic languages such as Ruby or Clojure (yes, Clojure developers are known to use monads when it makes sense to do so).

Go's type system is a rather poor implementation of features that were properly implemented in other languages, such as Ocaml or Haskell. It's a shame really that such an awful implementation gained so much popularity on account of Google, but on the other hand I view it as a fad that will pass like all the rest.


    > Structural typing is nothing like duck typing. People 
    > saying that don't know what they are talking about.
Structural typing is at least facially similar to duck typing. People contesting that are being pedants.


Structural typing is facially similar to duck typing when anonymous types (interfaces and structures) are allowed, as in OCaml. I don't believe either exists in Go.


They are absolutely allowed. [1] You can have anonymous structs in a similar fashion.

[1] - http://play.golang.org/p/tHYLcdbPSS


I'll bite. What are useful uses of monads, other than implementing exceptions, async, state and IO? Granted, it's aesthetically pleasing to have a really tiny core on which to layer more sugar, but sometimes using the language as-is suffices.

The mother of all monads is Cont (http://blog.sigfpe.com/2008/12/mother-of-all-monads.html), but that belongs in the bowels of the compiler.


Continuations are useful monads too (for implementing coroutines or control features like Python's "with", etc).

As are parsers, uniqueness, randomness, non-determinism, readers, writers, regions, resource management, local-state threads (ST monad), probability distributions, software transactional memory, ....

Another nice thing is composing various monads to build a custom one for the purpose you need.

For example:

  StateT S1 (EitherT Err (State S2)) a
Will be a stateful computation that always yields a new S2 value, even in exceptions (preserving any modifications up to the exception), but only yields an S1 in the case of success.

This is just one of infinite possible compositions of monad transformers.

If you hard-code a certain ambient monad into your language, you won't be able to use monads as a DSL for the use case you have at a certain time.

For example, in the application I'm developing now, I use a "Transaction" monad that guarantees that my key/value store transactions cannot do any IO or anything other than read keys and write keys. As a bonus, that means I don't even need transactionality from the underlying data store -- I can "revert" or "commit" by implementing the Transaction monad as a state tracker of all the changes on top of a hidden IO layer that exclusively allows reading keys.

This also means I can implement useful transactional primitives such as "forkScratch" which allows me to fork the transactional state, run an action in that forked state, discard its transactional side effects, and keep its result. I use this to "try out" a transaction, see if it would work (in my context, type-check the new state and discard it) without actually having any effect.

tl;dr: There isn't a finite set of useful monads you can bake into the language. There are infinite compositions of useful monads.


The point is to not (re)invent little languages all the time, but to use the actual language. For coroutines, see https://sites.google.com/site/gopatterns/concurrency/corouti....

The parser combinator is also interesting. A parser combinator is merely a function that takes an input pointer and produces a list of matches. One can implement this in Go (or any programming language I know of) without agonizing over the syntax used to chain combinators.

Essentially, the type machinery necessary to implement and use monads is a tradeoff. One can implement the code in pretty much any language one wants. The only point in contention is how many invariants are enforced by the type system vs. how much type system wrestling one has to deal with. I lack the empirical evidence that, for example, a "monad that guarantees that my key/value store transactions cannot do any IO" is any better than a comment "// Transactions never do IO. To perform IO use package TIOBridge.". But I have a nagging feeling that adding logging or exceptions in this context is quite a bit more convoluted than "import log; log.info(...)" or "panic(...)".


Essentially, you're saying that the point is not to use DSL's. The benefits of DSL's are explained everywhere, so I don't think I need to repeat them here...

Parser combinators are not what you said they are -- and you won't be able to write the kinds of things you can in Haskell. e.g, to parse 10 comma-followed-by-int:

  replicateM 10 (parseComma >> parseInt)
How would you implement that in Go? Even if you violate DRY and re-implement replicateM in every monadic context I don't think Go will be able to encode anything similar in power to parser combinators.

> But I have a nagging feeling that adding logging or exceptions in this context is quite a bit more convoluted

To add exceptions, I use an EitherT transformation on my Transaction monad.

To do a debug log, you just insert an impure debug log as you would in a non-pure language.

To insert an actual production log you would use a Writer that accumulates the logs in order to eventually write them outside the transaction context. Otherwise, aborted transactions will also have the production logs. If you want these semantics, add logging capabilities to the transaction monad. Otherwise, you get nice guarantees about what can't happen.


But why the hell get into all that? We've got a job to do.

Your comment here is a great example of why people don't bother giving Haskell the time of day. I've already got business problems and performance problems, why give myself type system problems too? You're talking about adding on all these layers of complexity and abstraction, and the benefit is more "pureness". What do I care about pureness? I'm writing business code, or unix code, it's not going to be pure either way.

You'll claim that the type system makes all of the business problems just go away magically because your type system has reached a skynet level of self-awareness, but we both know you're gonna be debugging the same crap at the end of the day, except now you have 12 different monads, type constraints and a homegrown DSL in between you and the problem.

I'd prefer to work with a simpler environment, and it doesn't make me "too dumb to understand haskell". It just makes me "more productive than if I were working in haskell".


> why give myself type system problems too?

Exactly! Why use Go and have type system problems? Chase nil bugs in the middle of the night, when my type system could have caught them all for virtually no cost at all?

Why use Go and have type system problems like lack of sum types and pattern matching, having to waste my time emulating them with enumerated tags or clunky type switches?

What you're calling "layers of complexity and abstraction" are just "layers of abstraction" -- Haskell code to solve a problem tends to be simpler than Go code to solve the same problem. By simplicity, I'm talking about mathematical simplicity here. Not ease of learning. Simplicity is hard. But it pays.

I don't claim that the type system makes all problems go away magically, but it helps catch a far greater chunk of the errors.

> we both know you're gonna be debugging the same crap at the end of the day

Actually, no. If you've actually used Haskell, you'd know that debugging runtime problems is a much much more rare phenomenon. It happens, but it's pretty rare.

I don't ever debug null problems. I almost never have to debug any crashes whatsoever. I don't debug aliasing bugs. The vast majority of the sources of bugs in other languages do go away.

> I'd prefer to work with a simpler environment, and it doesn't make me "too dumb to understand haskell"

Who claimed you're "too dumb to understand Haskell"? If you're smart enough to write working Go programs, you're most likely smart enough to learn Haskell. But learning Haskell means learning a whole bunch of new useful techniques for reliable programming, and that isn't easy.

People who come to learn Haskell and expect it to be a new front on the same concepts they already know (e.g: like Go is) are surprised by how difficult it is -- because it isn't just a new front. There are a whole set of new concepts to learn. This set isn't really larger than the set of concepts you already know from imperative programming, but the overlap is small, and you forget just how involved what you already know is.


Hey, this is a pretty late response but regarding:

"I don't ever debug null problems. I almost never have to debug any crashes whatsoever. I don't debug aliasing bugs. The vast majority of the sources of bugs in other languages do go away."

I think this is the red herring at the heart of the problem. Those bugs really aren't a big deal, they happen rarely once you're proficient and they are quickly solved on the rare occasion when they do happen.

I'm talking about logic bugs, the kind that your compiler isn't going to find, or even that a "sufficiently smart compiler" couldn't find because it's a misunderstanding in the specification that you have to bring back to the product owner for clarification. Or bugs that occur when 2 different services on different machines are treating each other's invariants poorly. Those are the bugs I spend time on.

I haven't spent any time at all with Haskell, really, but it seems like a poor trade off to have to learn a bunch and engineer things in a way that's more difficult in order to prevent the easiest bugs.


> I think this is the red herring at the heart of the problem. Those bugs really aren't a big deal, they happen rarely once you're proficient and they are quickly solved on the rare occasion when they do happen.

This is simply not true. I don't only work in Haskell. I also work with many colleagues on C and on Python.

Virtually every bug in C or Python that we encounter, including ones we have to spend a significant amount of time debugging is a bug that cannot happen in the presence of Haskell's type system.

> I'm talking about logic bugs, the kind that your compiler isn't going to find, or even that a "sufficiently smart compiler" couldn't find because it's a misunderstanding in the specification that you have to bring back to the product owner for clarification. Or bugs that occur when 2 different services on different machines are treating each other's invariants poorly. Those are the bugs I spend time on.

If you had experience with advanced type systems -- your claims here would carry more weight. People who don't know advanced type systems tend to massively understate their assurance power. For example, 2 different communicating services might use "session types" to verify that their protocol maintains its invariants. Or the types might be set up so that the only programs that type-check are ones that reject inputs violating the invariants.
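Even without Haskell, a toy flavor of the idea can be sketched with distinct state types (names invented; real session types are much stronger):

```go
package main

import "fmt"

// A toy login-then-query "session". Each protocol state is its own
// type, so performing an operation out of order doesn't compile.
type Disconnected struct{}

type LoggedIn struct{ user string }

func (Disconnected) Login(user string) LoggedIn {
	return LoggedIn{user: user}
}

func (s LoggedIn) Query(q string) string {
	return "result for " + q + " as " + s.user
}

func main() {
	var c Disconnected
	// c.Query("x") // compile error: Disconnected has no method Query
	fmt.Println(c.Login("alice").Query("x"))
}
```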

> I haven't spent any time at all with Haskell, really, but it seems like a poor trade off to have to learn a bunch and engineer things in a way that's more difficult in order to prevent the easiest bugs.

They aren't the "easiest bugs" at all.

For example, consider implementing a Red Black Tree.

In Go, imagine you had a bug where you rotate the tree incorrectly, such that you end up with a tree of the wrong depth -- surely you would have considered this a "logic" bug. One of the harder bugs that you wouldn't expect to catch with a mere type system.

In Haskell, I can write this (lines 27-37):

https://github.com/yairchu/red-black-tree/blob/master/RedBla...

to specify my red black tree, with type-level enforcement of all of the invariants of the tree.

With just these 10 lines, I get a compile-time guarantee that the ~120 lines implementing the tree operations will never violate the tree's invariants.

This logic bug simply cannot happen.

Learning a bunch of techniques is a one-time investment. After that, you will build better software for the rest of your career. Debugging far fewer bugs for the rest of your career. How could you possibly reject this trade-off, unless you expect a very short programming career?


I meant less interesting logic bugs, like "Oh we never considered the intersection of these 3 different business use cases".

I could see a couple ways where the type system could be more powerful than unit tests, but only to the extent that your unit tests didn't cover some obvious cases to begin with. Why not just write unit tests?

As for how I could possibly reject the trade-off... I mean, nobody's gonna hire me to code Haskell and my side projects are too systemy and not lispy enough to even consider it.

Thanks for the code sample though, I plan on looking at this more later tonight and getting a feel for it (barely glanced just now).


> Oh we never considered the intersection of these 3 different business use cases

Take a look at the history of any repository near you, for a project that uses C, Python or Java.

Review bug fix commits. See how many of them relate to "business use cases" and how many relate to implementation bugs. I believe you'll find the latter is far more common.

Even in the "business use cases", enforced invariants will be a tremendous help. Among the infinite possible combinations of use cases, none will be able to break an invariant enforced by the type system.

When you set out to prove a property of your program, you will end up finding bugs, almost regardless of what properties you are trying to prove.

Writing unit tests is also useful. But if I can have 10 lines in my Red Black Tree that mean I don't have to write any test whatsoever for the tree's invariants -- I saved myself from a whole lot of work writing and later maintaining tests with every change.

Generally, to get similar confidence levels from unit tests as you get from types, you'll need to write many more tests. If I had to choose whether to trust a well-typed system written in Agda (which is similar to Haskell but has an even more powerful type system) with only the most trivial testing done, or trust a highly tested system written in dynamic or a much weaker type system, I'd definitely trust Agda more.

Or if I were to trust my 10 lines of type code or hundreds of lines of tests for the invariants of the tree, of course the 10 lines of code are far more reliable and easier to maintain.


Honestly, in my case, for my day job, it's way, way more "business use case" or more frequently "misunderstanding between 2 services" than it is an implementation error. We catch 90% of implementation problems with unit testing and/or just plain sanity checking it before release. Maybe Haskell could help us by making unit testing easier/unnecessary but of course there's no switching at this point.

We're probably a bit of an edge case, being a very service-oriented architecture (1,000 servers, 6-7 major classes of server, handling 10B (yes, B) requests a day). Most of our bugs consist of a flawed assumption that crosses 3-4 service boundaries on its way off the rails. I'll admit I'm ignorant of Haskell but I just don't see a type system fixing that for us.


Did you actually look at commits to reach this conclusion?

Also, do you commit only/amend after doing extensive testing? Or do you also commit the results of debug sessions as separate commits?

People have various biases that make them remember some things and forget others. It is easy to hit 100 boring implementation bugs and 1 interesting bug, and come away remembering the interesting kind as the common one.

Also, can you give me a couple of examples of "flawed assumptions" across services?


Well, for example, yesterday I was doing some UI work on a project I'm totally unfamiliar with. My bugs were caused by bad SQL, a null pointer exception, and some JS silliness. Most of my time was sucked up in figuring out a requirement. The NPE took about 10 minutes out of my day, and the fix for it never made it into a commit message because it was, write some code, run it, oh shit forgot to initialize that, fixed.

Flawed assumptions across services tend to have to do with rate limiting, configuration mismatches, what happens when one class of service falls behind and queues fill up, stuff like that.


What kind of bad SQL? Most forms of bad SQL can be ruled out by a well-typed DSL. JS silliness is also a type safety issue. You can use Fay, Elm, GHCJS or Roy to generate JavaScript from a type-safe language.

Some NPEs take just 10 minutes (not negligible), but some are expensive.

Figuring out the reason is sometimes hard, when the code and its assumptions are badly documented.

Fixing NPEs is sometimes hard for silly reasons, such as having to touch third-party or "frozen" code.

Also, NPEs can become extremely difficult when the code is bad in the first place.

Things like rate limiting and queue lengths can be encoded in types. You can use type tagging on connection sources/destinations to make sure you only hook up things that match rates/etc.
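The tagging part can be sketched even in Go with named types (HighRate/LowRate are invented for illustration):

```go
package main

import "fmt"

// Tag connection endpoints with their rate class in the type,
// so a mismatched hookup is a compile error rather than an
// overloaded queue in production.
type HighRate <-chan []byte

type LowRate <-chan []byte

// consumeHigh only accepts endpoints deliberately tagged HighRate.
func consumeHigh(in HighRate) string {
	return "wired high-rate consumer"
}

func main() {
	fast := make(chan []byte)
	fmt.Println(consumeHigh(HighRate(fast))) // tagging happens once, at the boundary
	// slow := LowRate(make(chan []byte))
	// consumeHigh(slow) // compile error: cannot use slow (type LowRate) as HighRate
}
```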


Man, just when I thought we might agree on something.

A DSL? SQL IS a freakin DSL. Why would I put another layer of abstraction between me and it? Just more places for things to go wrong.

http://en.wikipedia.org/wiki/Inner-platform_effect

Anyways that particular problem yesterday wasn't a compilation problem, it was due to my own misunderstanding of some pre-existing data.

You're proposing a more-complicated way of doing things with the idea that eventually we'll get to the promised land and things get simple again. I've just never seen it happen. Seen the opposite plenty of times.


SQL is a DSL indeed, but building SQL strings programmatically is an awful idea. You should build SQL queries structurally. The DSL to build SQL should basically be a mirror of the SQL DSL in the host language.

The DSL will guarantee that you cannot build malformed queries. It can also give guarantees you design it to give.

I'm not talking about replicating a database -- but just wrapping its API nicely in a type-safe manner.

I am proposing wrapping type-unsafe APIs with type-safe API's. This adds complexity in the sense of enlarging the implementation. But it also adds safety and guarantees to all user code.
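As a rough Go-flavored illustration of structural query building (a toy; Select/From/Where are invented here and carry none of the stronger guarantees a typed DSL can encode):

```go
package main

import (
	"fmt"
	"strings"
)

// Queries are assembled from values rather than concatenated
// strings, so clauses can only be combined in ways the builder
// allows, and rendering happens in exactly one place.
type Query struct {
	table   string
	columns []string
	where   []string
}

func Select(cols ...string) Query { return Query{columns: cols} }

func (q Query) From(t string) Query { q.table = t; return q }

func (q Query) Where(c string) Query {
	q.where = append(q.where, c)
	return q
}

// SQL renders the query; this is the only place strings get glued.
func (q Query) SQL() string {
	s := "SELECT " + strings.Join(q.columns, ", ") + " FROM " + q.table
	if len(q.where) > 0 {
		s += " WHERE " + strings.Join(q.where, " AND ")
	}
	return s
}

func main() {
	fmt.Println(Select("id", "name").From("users").Where("age > 21").SQL())
}
```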


Hmmm... not quite a flashback. If it were a flashback, you would have written "C" instead of "Haskell", and would have been thinking of assembly language rather than C or C++.

You have the same problems no matter what; the question is whether you want a tool that helps with them, or one that doesn't.


There is one construct for repetition in Go. It's called "for".

    parser := ParserNil
    for i := 0; i < 10; i++ {
      parser = ParserSeq(parser, ParserSeq(parseComma, parseInt))
    }
PS. I'm a complete Go newbie. Don't take this code as the "one true Go way".


So you're forced to duplicate the monadic combinators (e.g: inlined replicateM here).

Now let's consider:

  myParser = do
    logDate <- date
    char ':'
    logTime <- time
    let fullTime = fromDateTime logDate logTime
    msg <-
      if fullTime < newLogFmtEpoch
      then do
        str <- parseString
        return (toLogMsg str)
      else do
        idx <- parseLogIndex
        getLogParser idx
Translating this to bare-bones Go would require using explicit continuations everywhere. Any combinator you use from Control.Monad is going to be duplicated/inlined in your Go code, violating DRY repeatedly.


> forced to duplicate the monadic combinators (e.g: inlined replicateM here).

??? This is a bare bones for loop. What exactly do you consider "duplication" ???

Regrettably, I can't accept the challenge because I have no idea what the code you pasted is doing and why.


A bare bones for loop is the implementation of replicateM. There are more complex combinators than replicateM, e.g: filterM, zipWithM, and more -- and those would require more than duplicating a bare bones loop at every occurrence.

The code I pasted above uses "do" notation to write a monadic parser. The parser reads a format that starts with <date>:<time>; if that timestamp falls before newLogFmtEpoch it parses the rest as the old string format, otherwise as the new indexed format. Which part of the code are you having difficulty understanding?
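For comparison, here's what hand-inlining one such combinator looks like in Go -- filterM over the error "monad", which has to be re-spelled as a loop at every use site (keepEven and its inputs are invented for illustration):

```go
package main

import (
	"fmt"
	"strconv"
)

// keepEven keeps the strings that parse as even numbers, aborting
// on the first parse failure. In Haskell this is one filterM call;
// in Go the loop shape is repeated wherever it's needed.
func keepEven(ss []string) ([]string, error) {
	var out []string
	for _, s := range ss {
		n, err := strconv.Atoi(s) // the effectful predicate
		if err != nil {
			return nil, err
		}
		if n%2 == 0 {
			out = append(out, s)
		}
	}
	return out, nil
}

func main() {
	evens, _ := keepEven([]string{"1", "2", "3", "4"})
	fmt.Println(evens) // [2 4]
}
```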


    > Essentially, you're saying that the point is not to use 
    > DSL's. The benefits of DSL's are explained everywhere, 
    > so I don't think I need to repeat them here...
DSLs aren't a gimme. In production code, produced and maintained by a team, and iterated through a lifetime of morphing business requirements, DSLs are almost always more of a hindrance than a help, because they impose an additional cognitive burden on their manipulators.

They're lovely to write, and elegant in a closed system, but in The Real World(tm) where we all live and work, we don't generally have the luxury of writing software to solve those classes of problems.


This is called the "Real World" fallacy. Your techniques are unfamiliar to me, and I am in the "Real World", therefore you are an academic who doesn't actually solve real world problems.

DSLs are used in production code, and solve real world problems better than ad-hoc repetitive code does.


    > DSLs are used in production code, and solve real world 
    > problems better than ad-hoc repetitive code does.
And this is called the "argument by assertion" fallacy.

I'm sure there are a lot of problems where DSLs make sense to use. They're simply not _most_ problems.


Whenever you're creating a function, you're defining a verb in your domain-specific implementation. Whenever you're creating a class / interface / prototype, you're defining a noun in your domain-specific implementation.

To me, the usage of the term itself ("DSL") does not make much sense. Using a combination of public or custom libraries and APIs is in itself a DSL, and the combination makes it unique per application. And when programming, you're extending that language all the time: that's what you're doing with every function, class or interface you add. That's what programming is -- specifying to the computer how to do computations by building a language made of nouns and verbs it can understand, and then forming sentences out of those nouns and verbs. And these definitions transcend the actual lines of code: when you're communicating with your colleagues, in speech or in writing (emails, specs), you need precisely defined words to refer to concepts within your app.

The term DSL in the context of software-development is basically a pleonasm. And discussions on DSLs are actually stupid, as people argue about a non-issue.

The real discussion should be - in what contexts do you really need re-usability and/or composability and/or succinctness? Not always, I'll grant you that.

And here, I think we can learn from mathematics or physics, spanning domains so complex as to be intractable without defining mini-languages to express things efficiently. Speaking of Monads, many people described them in terms of mathematics, like with the infamous "a monad is a monoid in the category of endofunctors". You could say monads are just a design pattern, with some simple properties to grasp and some examples and normal people wouldn't need more to understand their usage, however understanding their mathematical underpinnings, that use big and unfamiliar words that scare us, allows one to grok the notion and build on top of it bigger and better abstractions. And abstractions help us to tackle even more complex problems. Yes, even in the real world.


    > Whenever you're creating a function, you're defining a 
    > verb in your domain-specific implementation. 
Yes. And by virtue of it being a function, i.e. a first-class operator in the language I'm working in, I also know _prima facie_ the semantics, cost, and implication of that verb.

This is critical and necessary knowledge. And it's precisely the knowledge that I _don't_ get (immediately) when I use a DSL. I have to know both the semantics of the verb within the context of the DSL, and the semantics of the DSL (as a whole!) in the context of my programming language.

That additional step is, more often than not, a significant burden. I'm disinclined to bear it, no matter how facially elegant it may make the solution.

    > The real discussion should be - in what contexts do you 
    > really need re-usability and/or composability and/or 
    > succinctness? Not always, I'll grant you that.
This is a disingenuous framing of the problem.


Ah, but internal DSLs do use the same constructs and semantics that the language provides, unless you're talking about macros.

And we aren't talking about macros here, but about monads (a quite reusable design pattern), possibly in combination with the do-notation from Haskell, or for-comprehensions from Scala, or LINQ from .NET ... basically a simple and standardized syntactic sugar to make operations on monads more pleasant to read, but not really required.

A monad is basically a container type equipped with certain functions that satisfy certain properties. That's not a DSL. Those are just function calls on a freaking container implementing a design pattern.


I guess what you call an "internal DSL" I call an API.


Monads with do-notation also provide a really powerful basis for creating custom DSLs with very varied semantics, which still allow you to use all the standard library functions for working with monads. I suspect various other abstractions could be similarly powerful with appropriate syntax sugar.


Go has interfaces that you implement just by having the methods, without declaring that you're implementing the interface (e.g. "implements" in Java).
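A minimal example of what that looks like (a sketch):

```go
package main

import "fmt"

// Stringer is satisfied by any type with a String() string method --
// Point never declares that it implements anything.
type Stringer interface{ String() string }

type Point struct{ X, Y int }

func (p Point) String() string { return fmt.Sprintf("(%d,%d)", p.X, p.Y) }

func describe(s Stringer) string { return "value " + s.String() }

func main() {
	fmt.Println(describe(Point{1, 2})) // value (1,2)
}
```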

How well does that work with tooling? For example, in Java, I frequently ask my tools, "Give me a list of classes that implement this interface," and I expect a fast and -- most important -- 100% accurate answer.

My instinct is that Go's approach makes it tooling-unfriendly (a good chance of false positives, especially in large systems), but perhaps I'm missing something.


    > in Java, I frequently ask my tools, "Give me a list of 
    > classes that implement this interface,"
You don't generally ask this question when working in Go, because you don't really construct your "type" system in these terms.


Almost, except that you still do have to be conscious of argument and return types. The one thing that still catches me (and I'm a relative newbie in Go, so that's probably why) is attempting to return nil when the return type won't allow it, such as the case when the return type is a string. It feels weird, but at least the compiler catches it so I can fix it and move on.
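For instance (lookup is a hypothetical example):

```go
package main

import (
	"errors"
	"fmt"
)

// You can't return nil for a string return type; the idiom is to
// return the type's zero value alongside the error.
func lookup(m map[string]string, k string) (string, error) {
	v, ok := m[k]
	if !ok {
		return "", errors.New("not found") // "" is string's zero value, not nil
	}
	return v, nil
}

func main() {
	v, err := lookup(map[string]string{"a": "b"}, "a")
	fmt.Println(v, err) // b <nil>
}
```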


I've read that people have had success doing line-by-line conversions of Python scripts to Go scripts without too much reworking. I am guessing that it is the similar level of model and library granularity that drives the comparison.


If your Python (or in my case, Ruby) script is highly procedural then the transition over to Go is quite straightforward. The first production Go app I rolled out followed this pattern (previously a small, procedural Ruby script, now a small, procedural Go app) and I'm quite happy with the results. The resource usage is so much better with Go for this case that it really was the perfect case for switching.


I saw on your blog that you've jumped to full time on DNSimple (congrats). Are you using Go app you wrote as part of the DNSimple system?


Yes. We use it for our redirector (which is the project I mentioned that I switched over) and we're using it for a new zone server that we're working on that is used by our new name servers (which are written in Erlang).


I've had success doing that, while never having written in Python.


Because it seems most new Go coders come from Python or Ruby, with limited experience in any other language.

As such you tend to see a comparison with the other language they know, which for many is Python.


I (a go noob) somehow got the impression that you're supposed to version your API when writing go libraries. ie, your library should be github.com/501/foo/v1 rather than github.com/501/foo. Can any go users comment on whether that's expected practice?


You can if you want, and some folk do, but that's really outside the scope of the language itself and much more about your project management, revision control, and so on.


The easiest way is just to keep your master always in a stable, clean state. Tools like git flow help with that.

Besides that, people who criticize go get's behaviour of checking out the latest revision (or the go1 / go1.1 tag, if available) seem to forget that you're always free to populate your $GOPATH however you like. You don't need go get for that.


I suspect you got this erroneous impression from branches named after the version of Go they are compatible with.

Many projects maintain branches named for the Go version they are compatible with, and the 'go get' tool automatically fetches the appropriate branch.


Actually I think I may have first picked it up from this vclock library: http://labix.org/vclock


Erlang does automatically parallelize over multiple cores. By default it will start one Erlang VM thread per core which work together to run the Erlang system. The Erlang VM also does automatic load balancing over the threads and even tries to shut down threads/cores if it detects they are not needed. It is possible to control how many Erlang VM threads you want at start-up and to lock them to cores as well, though the latter is not recommended. But as I said there is no need to do this.



