Go Bloviations (ridiculousfish.com)
124 points by samps on Sept 25, 2012 | hide | past | favorite | 110 comments



> This seems like needless boilerplate: why not instead simply pass a closure over a channel that the goroutine will execute? I have never seen this technique used in Go, but it seems natural to me. (It's essentially how libdispatch works.)

Passing a closure on a channel is both idiomatic and relatively common in Go code.

This and other comments make me suspect the author could have benefited from spending more time looking at existing Go codebases (the source of the Go stdlib is excellent).

Edit: His (IMHO somewhat strange) complaint that "Channel reads violate the laws of time and space by changing their behavior based on how their return values get used" doesn't apply anymore: to do async reads from a channel you now use select with a default case, not the comma-ok idiom.

Also, a package with more 'clever' Unicode operations will be part of Go 1.1, but I have personally never found the existing support lacking, and no, I don't live in an English-speaking country.


The space & time thing is a joke. Because we're nerds, we have trouble with clashes between literalism and humor. He is saying, the behavior of a function depending on its return value is like the past (the execution of the function) depending on the future (the storage of the result of the execution of the function).

He's a very smart guy (and the author of one of my all-time favorite Mac apps, Hexfiend) and I'm pretty sure he understands what is actually happening.

I'm not sure that spending a lot of time in Go is going to get him over the "Damnable Use Requirement". It is really, really annoying.


You can't send closures over channels in Go.


You can:

  package main

  import "fmt"

  func main() {
      x := 4
      f := func() { fmt.Println(x) }
      ch := make(chan func(), 1)
      ch <- f
      f2 := <-ch
      f2() // prints 4
      x = 5
      f2() // prints 5: the closure captures x by reference
  }
http://play.golang.org/p/NK5YLVT4zB


  test := make(chan func(int) int)

  go func() {
    f := <-test
    fmt.Printf("%d\n", f(10))
  }()

  test <- func(v int) int {
    return v * v
  }


A couple of responses:

- I too dislike the lack of a ternary operator. Python has this problem too (you can create boolean expressions to sorta mimic it but it doesn't tend to be considered "Pythonic"). And brevity is my reason too. I'm sure it's easier to parse without it but it can't be that hard.

- On the "damnable use requirement", I see his point. If anything, it means Go will work better with IDEs, which can add and remove imports for you automatically, than with plain text editors;

- On the "thread safe set", yeah he's Doing It Wrong [tm] (which I think he knows). You use channels to share state in Go rather than creating shared state directly.

- Unbuffered channels seem to be idiomatic. Race conditions and deadlocks seem to often be the result of using buffered channels;

- On his channel reads issue ("violating time and space") I disagree: it's good to have blocking and non-blocking channel reads.

I basically agree with his conclusions, particularly in Go feeling like a "modern C", something I desperately hope succeeds.


Go doesn't have non-blocking receives anymore.

  value, ok := <- ch
will block, always. Iff the channel is closed, ok will be false, allowing you to distinguish between a zero value sent on the channel, and the zero value you get back if you try to receive on a closed channel. Receiving from a nil channel always blocks.

Select statements allow you to check if a channel has had a value sent on it.

http://golang.org/ref/spec#Receive_operator

https://groups.google.com/d/msg/golang-nuts/Z63l4LDOlsI/54uT...

http://golang.org/ref/spec#Select_statements


For those who didn't dive into the spec, the non-blocking receive syntax has changed to use the select statement:

  func f(ch chan int) {
  	select {
  	case v := <-ch:
  		fmt.Println("got", v)
  	default:
  		fmt.Println("did not block")
  	}
  }
http://play.golang.org/p/ql0qSUVXeX


Too much can be made of the idea of sharing state via channels. If it's simpler to express a piece of code using a mutex, use a mutex. A thread-safe set is one such situation. Don't feel bad about doing so; there is nothing non-idiomatic about using locks in Go.

When using channels, the choice between buffered and unbuffered depends on the situation. There are cases where a buffered channel is required to avoid deadlocks. For example, consider the case where you start N goroutines and use a channel to collect the results. If you return before collecting all results, the remaining goroutines will block forever trying to write to an unbuffered channel. Using a buffered channel with size N avoids this possibility.


Agreed. Rob Pike, in his search engine example at Google I/O 2012 IIRC, made the explicit point that mutexes are there for a reason and that channels are for joining large concurrent parts of a program together; guarding a single data structure seems too small a use, even though it's a common (toy?) example. http://www.youtube.com/watch?feature=player_embedded&v=f...


> Python has this problem too

Python has had conditional expressions since version 2.5. Dumb example:

    def count(xs, p):
        return sum(1 if p(x) else 0 for x in xs)


Side note: you can simply use sum(p(x) for x in xs), assuming p is a predicate. It's a neat trick, though a bit slower than your ternary version.


Another way: len(filter(p, xs))


With the sum approach the data set never exists in memory, but is generated as needed; not a big deal for a small set, but it can add up.


And since in Python 3 filter returns an iterator, not a list, the len call will fail altogether.


Yes, I'm aware of that. But my philosophy is to always try to conserve keystrokes wherever possible. Premature optimization is the root of all evil and all that. :) For the same reason I prefer to use dict.items() over dict.iteritems() and range() over xrange() and so on.


This is a simple enough case that I'm not sure it matters, but I think the sum version is simpler to read. Generator expressions are quite powerful if you want to write Python which is more functional-flavored.


I disagree, I think filter/length models the meaning better.


I guess technically True==1 and False==0 in python and it is considered "pythonic" but personally I think it's "ugly".


I wouldn't even say it's Pythonic. As a reader, I would prefer len+filter (or ifilter). That's the most semantically clear.


I also prefer length + filter but while I dislike the style I think it is still considered pythonic.

The pep for adding bools to python: http://www.python.org/dev/peps/pep-0285/

    4) Should we strive to eliminate non-Boolean operations on bools
       in the future, through suitable warnings, so that for example
       True+1 would eventually (in Python 3000) be illegal?

    => No.

       There's a small but vocal minority that would prefer to see
       "textbook" bools that don't support arithmetic operations at
       all, but most reviewers agree with me that bools should always
       allow arithmetic operations.

    6) Should bool inherit from int?

    => Yes.

       In an ideal world, bool might be better implemented as a
       separate integer type that knows how to perform mixed-mode
       arithmetic.  However, inheriting bool from int eases the
       implementation enormously (in part since all C code that calls
       PyInt_Check() will continue to work -- this returns true for
       subclasses of int).  Also, I believe this is right in terms of
       substitutability: code that requires an int can be fed a bool
       and it will behave the same as 0 or 1.  Code that requires a
       bool may not work when it is given an int; for example, 3 & 4
       is 0, but both 3 and 4 are true when considered as truth
       values.

Compatibility

    Because of backwards compatibility, the bool type lacks many
    properties that some would like to see.  For example, arithmetic
    operations with one or two bool arguments is allowed, treating
    False as 0 and True as 1.  Also, a bool may be used as a sequence
    index.

    I don't see this as a problem, and I don't want evolve the
    language in this direction either.  I don't believe that a
    stricter interpretation of "Booleanness" makes the language any
    clearer.


- They copied Pascal-style type declarations (good!) ... but then modified them to omit the colon (bad!).

That one little simple change really makes type declarations less readable, for no apparent benefit. Using a colon makes the type clearly stand out, whereas without one, it sort of gets lost amid the variables.

Pascal-style (Ada, etc, etc):

  var foo, bar : int = 1, 2

Go:

  var foo, bar int = 1, 2

The latter is uglier and harder to read, and doesn't save any appreciable space.

p.s. One saving grace: they could probably add colons back into the type-declaration syntax as an option, without affecting existing programs....


It's less of an issue with var declarations because of type inference. The bigger issue is with function signatures. Type signatures are mandatory there and you also have two levels of commas: func(a, b int, c, d bool). When it comes to syntax I generally prefer flat to nested (Python gets it right) but there is such a thing as too flat. A small saving grace is that exported identifiers in Go, including types, have to be capitalized, making them stand out in signatures.


> exported identifiers in Go, including types, have to be capitalized

Ugh... (notes another Go uglypoint)


To some, explicitly placing keywords like "public", "private", "protected" all throughout your code is ugly. Go gives an explicitly defined coding style; all Go will look roughly similar, and it will never be ambiguous if an identifier is exported or not.


> To some, explicitly placing keywords like "public", "private", "protected" all throughout your code is ugly.

I dunno; maybe it's "ugly," but it's also explicit, and easy to see. Assigning meaning solely to subtle presentational differences can make code hard to read, and increase the likelihood of mistakes (as well as confusing beginners).

[It's a similar problem to Python's significant whitespace.]

Unfortunately it's all this sort of "cute idea" that makes Go seem rather half-baked. It's like they designed the language in a brainstorming session in a bar, mixing ideas from a bunch of people without strong editorial control, and then just released the result without actually understanding the repercussions of many of their decisions. [I'm not saying they're all bad, it just feels like there was a lot more brainstorming than vetting...]


Unfortunately it's all this sort of "cute idea" that makes Go seem rather half-baked. It's like they designed the language in a brainstorming session in a bar, mixing ideas from a bunch of people without strong editorial control, and then just released the result without actually understanding the repercussions of many of their decisions.

This is as far from the truth as I can imagine.

Yes, "subtle presentational differences" can make code hard to read, but in this case it doesn't. When I reference another package I know that all exported variables begin with a capital letter. When I'm writing a package I know whether I'm calling a function that's "private" or "public" without having to look it up.

It's one of some of the Go Authors' favourite ideas in use in Go, and I wish people would stop armchairing about the effect some language feature has without trying it.

The likelihood of mistakes is not really increased, because they will be caught at compile time, or you'll end up with an exported function you didn't realise you wanted.

A confused beginner should be able to understand this concept within seconds. And it's prominently mentioned in the Go tutorials.


> I dunno; maybe it's "ugly," but it's also explicit, and easy to see. Assigning meaning solely to subtle presentational differences can make code hard to read, and increase the likelihood of mistakes (as well as confusing beginners).

I typically disagree when people say, "Well you haven't tried Go yet, so maybe you should," but this is one of the few cases where I agree. The exported/unexported syntax is weird to look at, but once you start using it, you quickly realize it to be amazing.

One of the key things about using a capital letter to indicate export/unexport is that you know if a function or a method is exported at the call site. That is, the export information isn't just in the function declaration, but in the name itself.

And trust me, it is not hard to read. It's not that hard to imagine that your eyes are quickly trained to see the export information.

> [It's a similar problem to Python's significant whitespace.]

No, that's not a problem for anyone who knows how to not mix tabs and spaces.

> and then just released the result without actually understanding the repercussions of many of their decisions

What's ironic is that you're criticizing a feature that they kept because of how it worked in the real world.


I hate Pascal-style type declarations. Using AS3 has given me a great hatred of that colon.


On the "violating time and space" issue, I don't think the problem was on there being both blocking and non-blocking channel reads. Instead the problem for him was that in his mind the channel read "happens first" and how the result from that read is used (i.e. what it is assigned to) shouldn't have any effect on the read anymore. I must say that I agree and would prefer to have a more clearly separate syntax for blocking and non-blocking reads.


Yeah, it's arbitrary that val, ok := <- ch does something different than val := <- ch, but any other convention for differentiating blocking from nonblocking would be just as arbitrary. I think it's something I would get used to after a while.


It's not so much arbitrary as an exception. "The RHS is evaluated and assigned to the LHS, except in this edge case". Using a different syntax would be just as arbitrary, but less of an exception e.g. val, ok := <- ch, val := <~ ch still follows the expected order of evaluation.


It certainly does, but they're both blocking.

http://news.ycombinator.com/item?id=4569456


Thanks for the correction. Back when I first used go, they did different things. The change makes sense, I think-- we already have select for nonblocking I/O.


- On the "damnable use requirement", I see his point. If anything, it means that Go will be better used with IDEs than text editors that'll do this for you automatically;

The lack of partial/incremental/parallel compilation makes this a lot less attractive. A lot of IDE technology (e.g., "intellisense", syntax errors, missing imports) is built on rapid recompilation.

In fact, doing it any other way seems pretty stupid. If your compiler doesn't offer that information then the IDE has to re-implement those codepaths just to provide that information.


Go generally compiles so quickly that incremental compilation is unnecessary.


Actually, incremental compilation is the main reason Go compiles so quickly.

Super-fast incremental compilation is the reason they never bothered with parallel compilation.


Go does incremental compilation just fine.

Nobody has bothered to add parallel compilation because building Go code is already so ridiculously fast.


Fast is relative to purpose. Maybe it could afford to be faster in the context of an IDE checking your code on the fly, for a large project. I wouldn't know, since I have not attempted that.


actually, the Go tool compiles packages in parallel by default.


...under the assumption that something does not depend on too many cgo packages.


Source? That's not what the article says.


The 8g, 6g, etc compilers operate on a single .go file at a time. You're welcome to invoke them directly if you want.


For the information of readers, Python does have a ternary operator. I don't know or care whether it is considered Pythonic, since it is basic Python syntax and seems quite readable to me, e.g.

x = (2 if y > 3 else 4)


> it's good to have blocking and non-blocking channel reads.

That's not really his issue. His issue seems to be the overloading of the channel read to very different operations based on the number of return values.

I don't think he'd have minded if blocking and non-blocking channel reads were different operators, but it's the same operator returning either one or two values. It's harder to read.


We need a preprocessor for Go to add stub code for unused vars and imports.


No you don't

import ( _ "fmt" )

Try it.


A couple of remarks. FWIW my job is as a Go programmer.

The unused variable thing (particularly with local variables) has caught many bugs quickly for me. If there's a package that I find myself always adding and removing, it's easy to write a function that freezes it. For instance, for the debugging print example:

    func printf(f string, args ...interface{}) {
        log.Printf(f, args...)
    }
(I tend to use log for debugging prints rather than fmt, because it's guaranteed thread-safe, and it's easier to separate debugging prints from necessary prints at a later stage).

On the few occasions that I've really felt I lacked a ternary operator, I've just coded one up for the occasion:

    func either(cond bool, a, b string) string {
        if cond {
            return a
        }
        return b
    }
No big deal. I do find myself writing little functions as building blocks quite a lot, but I think this works well. Functions are a great building block, and Go is great for assembling blocks.


Question: how do you feel about the lack of exceptions? To me, that's the biggest drawback to Go as it feels like sacrificing a major improvement in reliability and boilerplate reduction (obviously with Python-style syntax, not Java).


I don't agree that exceptions necessarily improve reliability or reduce boilerplate.

But in any case, I've found I usually do better error handling in Go.

Seeing the "multiple-value ... in single-value context" compiler error message immediately tells me that the function I'm calling either doesn't do what I think, or (more usual) that I'm not handling an error return value.

When I fix the compile error I have to consciously decide how the error will be handled. Will I ignore it by assigning it to "_"? Will I pass it to a generic error handling function? Will I do special case handling?

On the other hand, in Python it's really easy to ignore the exceptions that can be thrown by a function. The interpreter doesn't help at all. If I leave out a try/except block then I don't find out about it until runtime, and only then if I get lucky (unlucky?).


How does it sacrifice reliability? Are try/catch blocks not also their own flavor of boilerplate? All error codes do is put the error handling you ought to be doing in the first place near the area an error could occur.

See also Raymond Chen's posts about good vs bad exception handling (H/t Russ Cox on G+):

http://blogs.msdn.com/b/oldnewthing/archive/2004/04/22/11816... http://blogs.msdn.com/b/oldnewthing/archive/2005/01/14/35294...


> The unused variable thing (particularly with local variables) has caught many bugs quickly for me.

I agree, unused variables in any language are great for catching bugs. This isn't an argument for unused as errors instead of warnings.


The `either` function is really crippled compared to a proper ternary operator; it works only for strings, and both `a` and `b` are evaluated at the call site, so side effects in those expressions are out.


cough generics cough

I suppose you could cast it, or use function literals for delayed evaluation.


A ternary op (Python's is nice) would be welcome. ;-)

    map[bool]int{false: 3, true: 14}[foo != 0]


I've started using the builtin "println" function for stdout debugging to get around the Damnable Use Requirement. It works without any packages imported, so that the very low-level stuff can tell it's alive without a full stdio library stack in place.


you can also add, just after import "fmt": var _ = fmt.Println

That way it will never complain that fmt is imported but not used.

Still, I never had need for this. Once you have written a bit of code the import list remains relatively stable.


I don't want to commit temporary debug cruft to version control, so I'd still need to keep adding and removing that whenever I do a commit. Though I guess I could just always skip it with git add --patch.

I do have need for some solution, since I have packages that don't do anything with strings and therefore don't import fmt, but which still get bugs that I need to debug via stdout.

One more robust approach would be to fit a complete configurable logging system permanently in place.


Missing "var", and a bit ugly, but could be used if it's annoying you :)

http://play.golang.org/p/vVnZWobca9


I use this too. It seems undocumented.


It's documented here, with an admonition not to expect the feature to remain in all future versions: http://golang.org/ref/spec#Bootstrapping


>I used Google's new Go language for two days.

I don't mean to be rude but please don't write about things you've only used for two days. This is a massive problem for all new things. Initial impressions mean a lot, and you honestly can't grok a new language in 2 days well enough to blog about it effectively. If everyone here went out and criticized Haskell after two days of usage, I don't think they'd understand enough to criticize it beyond "it's too complex".

Lots of the decisions made for the Go language aren't for the hobbyist. It's targeting the building of large, powerful, heavily code-reviewed systems, because C++ with an enforced style guide is a nuisance. Let's look at an example decision: the "use requirement" is actually a godsend for removing unused dependencies. Go compiles as fast as it does partially because of its strict dependency resolution. Rob Pike has a story about an engineer accidentally compiling a library 80,000 times in the same C++ build without knowing for years. The author of this blog got mad because it was inconvenient to debug with print. Now if only he hadn't been using the language for just two days, he'd know that the Go compilers implement builtins called print and println specifically for debugging. You don't need to import fmt to debug with print/println.


Bear in mind what he actually did was write notes after 2 days, hold them unpublished for over a year, and then post them with updates and commentary.

Besides, it's ridiculous fish, his notes on two days of Haskell would probably be well worth reading too.


Your point is dealt with, very clearly, in the sentence following! - "This qualifies me to bloviate on it, so here goes."


    sm <- commandData{action: length, result: reply} return (<-reply).(int)
wowza.


Go away.

And anyway, that's two lines masquerading as one. Semicolon insertion won't work and this won't compile.

  sm <- commandData{action: length, result: reply} 
  return (<-reply).(int)


His SafeSet implementation exposes functionality as exported fields instead of exported methods. That should be a red flag to anybody with any real-world Go experience. Exposing functionality through channel fields obviates the possibility of generalizing the functionality as an interface. It also creates subtle traps, where the caller must now know whether they're expected to set these channels to be buffered or unbuffered.

If you're dead-set on implementing this with channels, those channels should be unexported and hidden:

http://play.golang.org/p/zRSco7MpMl

In reality, it's far more sane to just say `type SafeSet struct { d map[string]bool; mu sync.Mutex }`, again exposing all the functionality through methods, and just locking and unlocking the mutex as necessary. If using a mutex makes the implementation more clear and is semantically equivalent, then use a mutex. That's not typically the case, because typically the case involves more than just locking and unlocking. In this particular example, all you need to do is lock some resource, so it's ok to just use a lock.

And the claim that there's no data privacy isn't correct; it just assumes a data privacy model that Go doesn't have, namely the "public" and "private" notions. Go instead opts for "exported" and "unexported" fields and methods, where unexported ones may be used only by code within the same package. It's a different mechanism for hiding data, but it does exist.

Anywho, if you want to see a completely absurd, lock-free (not strictly nonblocking), threadsafe queue in 51 lines of Go using higher order channels, just for kicks, see here: https://gist.github.com/3668150


Instead of

  if expr {
      n = trueVal
  } else {
      n = falseVal
  }
Just do

  n = expr

!


I think you're meant to read "trueVal" as "the value I want n to have if expr is true", not as literally true. And vice versa.


should be easy enough to do a "choose" stmt in golang, stolen from a business lang called Clarion, but expanded to include closures: n = choose(expr,14,choose(expr2,15,some code that returns something))


There's a pattern you should be using here, instead:

"""

Something doesn't work right, so you add a call to fmt.Printf to help debug it. Compile error: "Undefined: fmt." You add an import "fmt" at the top. It works, and you debug the problem. Remove the now annoying log. Compile error: "imported and not used: fmt." Remove the "fmt" knowing full well you're just going to be adding it back again in a few minutes.

"""


i believe he'd be happier with D, which addresses a lot of the issues he had with go, and which can genuinely be used as a better C++.


Many people are not looking for a "better C++"; in the land of C++ it seems "better" means "more features", and some of us are not interested in more features but in a better selection of fewer, useful features.

See Rob Pike's essay on this very topic and the design of Go:

http://commandcenter.blogspot.nl/2012/06/less-is-exponential...


The goals of C++ are performance and abstraction and I do not know any language which beats it in both points.

I appreciate "small" languages, which can do a lot with a little core plus abstraction mechanisms (Scheme, Smalltalk, ...). However, for maximum performance you need a good compiler, which understands and optimizes your abstractions. In the case of Lisp, whoever writes the macros is also responsible for optimizing them. Unfortunately, this requires application programmers to also be good at writing compilers.

C++ certainly has deficiencies, which cannot be fixed without completely breaking backwards compatibility. Rob Pike probably has them all included in the list you linked to. E.g. irregular syntax and header files.

In my opinion D has mostly fixed those conceptual issues. Unfortunately, D is not mature [0] and development is slow. While D is certainly not the final word in language design, I consider it the only serious competitor to C++.

[0] http://3d.benjamin-thaut.de/?p=20


have you taken a look at D? it cleans up a lot of c++ warts, while not being any more kitchen-sinky.


>Many people are not looking for a "better C++", in the land of C++ it seems "better" means "more features", and some of us are not interested in more features, but in a better selection of fewer useful features.

This is a knee jerk cliche reaction.

D is much better designed than C++.


Or perhaps Rust, which is both simpler than D and has more features in common with Go.


yes, i'm looking forward eagerly to rust, especially because it has algebraic datatypes and pattern matching. not sure how it will stack up to D speedwise, though.


What in particular are you not sure about in terms of performance? With the exception of segmented stacks, Rust adheres to the zero-cost principle just as C++ does.


not any expensive-looking features per se, simply that from looking at new languages, especially the ones with powerful features, speed is the hardest and usually the last thing to be achieved. i think ats is the only exception i've seen to that.


It's very hard to do Test Driven Development in Go because there is no mocking library that can help with stubbing constructor or static method.

I feel that people who jumped ship from dynamic language such as Ruby or Python to Go have no idea what they are giving up. Maybe they never practiced TDD to begin with.

Heck, even Java is more friendly with TDD by using Powermock or JMockit library.


It's very hard to do Test Driven Development in Go because there is no mocking library that can help with stubbing constructor or static method.

Well, you'll be happy to learn that Go doesn't have constructors or static methods, then. :)

What Go does have is a standard unit testing framework which is quite nice: http://golang.org/pkg/testing/

"Mocks" in general are a cumbersome and somewhat lame attempt to deal with the many limitations of inheritance-based designs in languages such as Java. In Go, there's no need to create a Mock, because you can easily create a wrapper object for whatever object you want to test using Go's embedded (anonymous) field syntax. Then you simply override whatever methods you want on the wrapper object and let the others pass through.


> For small compiles, the Go compiler was blazingly fast; on a large synthetic codebase (700 files), it was three times slower than clang compiling C.

This really stood out for me and seems to make a mockery of Go compile times for larger programs and perhaps more real-world situations.


If you've got a single package with 700 files (I'm pretty sure that's what he was testing), you're doing something wrong.


The author had everything in one package. The package is the real compilation unit. The author claimed that this is the only way to use Go because you can't have circular deps. Well, this is where structurally-typed interfaces come into play -- you shouldn't have a circular dependency; it's just that your interfaces need to be compatible. For instance, each of the parts of the standard library is its own package... there is no circular dep on the "reader" interface; things just implement Read() and it "just works" (and is enforced by the compiler at compile time.)

Having 700 files in one package is an abuse of the API.


It seems like it's a valid test of the compiler's speed as long as the C tests were done similarly. For real world situation who cares if it takes 300ms to compile instead of 100ms. But why not test with something the compilers have to work at?


I'm not sure if yet another person comparing their set of agreements/disagreements is useful, but here goes, because I like Go and think some people are skeptical about it with a bit too much enthusiasm, because it either seems too hip (this or that well-known company adopting it) or not hip enough (to the research-PL inclined, and even those who absolutely must have parametric data types).

Agreements:

All of the thumbs-up opinions I share, with the emphasis on that Go fills a niche that is otherwise a relative vacuum as-is.

The Damnable Use requirement, for precisely the reason he states. Add having to constantly frob a binding between "anySymbol" and "_" while doing some instrumentation or whatever. This kind of trivia should be fixable by a program; otherwise let it remain a warning. I think the occasional positioning of this inconvenience as a feature is unconvincing.

Annoyance at the lack of assertions. I think the concern given by the Go FAQ can be met even without making everyone roll their own assertion construct, but now we have either fewer assertions or a less idiomatic way of identifying them. I think an assertion is quite distinct from an error in that it should indicate a logically impossible condition rather than an unusual one, and I often see this guideline applied in the world at large. As such, I feel the FAQ seems flimsy in its justification, and now I have panics littered about that have a convention indicating their special nature.

Assertions are also one of the very few pieces of software engineering practice that have even a smidgen of empirical evidence in their support as well: http://research.microsoft.com/apps/pubs/default.aspx?id=7029...

Leakiness of semicolon insertion. The abstractions are leaky, but I like not having to type my semicolons and it's not a common mistake I make. Minor.

Lack of clarity with regard to Unicode. Yes, the language supports it quite well (as he mentions, unsurprising given its pedigree), but I'd say the type system and operators available are only so-so, and leaky abstractions between bytes and encoded text seem more present than in Java or Python 3. Among my favorites:

http://golang.org/src/pkg/text/scanner/scanner.go#L521

Wherein the "rune" returned by the "text/scanner" "Scan()" is used to identify the type of token rather than its value. Sometimes. I was mystified by the type signature for a while (why is this returning a rune, and not a lexeme or lex type?) and had to read the package to convince myself that was the intent. Plus side: this kind of abstraction leakage (or at least free-wheeling coercion) can probably yield much faster code.

Disagreements:

"Violation of space and time", whereby the semantics of a statement can vary with the arity of the binding. I think this is good, not bad. It's only a little weird that it's only available (afaik) as a special service for channel binding rather than for any operator defined by a user.

Buffered Channels and Deadlocks: non-deterministic deadlocks are a pain, and I don't think Go provides any real treatment (or exacerbation) of this problem, however, it probably could make that possible by adding some operators to enable writing a run-time deadlock detector, which can be useful for some programs.

Notable omissions, speaking in the positive:

I think go fmt is great. I don't like everything about the formatting, but I like the slavery-is-freedom approach there.

The gdb support is surprisingly good and neatly integrated into most new gdbs one gets off the shelf. Massive thanks to Ian Lance Taylor.


Annoyance at the lack of assertions

In 10 years in the industry, building distributed systems and server-side software, I've never seen assertions work out well. In two of the jobs I was at, we ended up making the assertions always-on, even in production, simply because people were too scared to run with them off. At another job, we simply never used assertions.

I've come to the conclusion that if you think you need more assertions, what you really need is more unit tests. There are other benefits that come from designing for unit tests, like modularity. Assertions are just a crutch, and they're often a sign that you're not thinking things through clearly (if a is always false when b is true, why do I need two booleans? etc.)


I have found assertions very valuable in both Python and C. I have worked with code bases that have them on in production and have them off in production, and the latter was generally the much more complex and more reliable system (it was also written with great care), but I do not think the on-ness or off-ness of the assertions was the principal cause of that (rather, the level of care/expense in writing the software). The former were Python programs, where the path of least resistance is to leave assertions on (AFAIK), and hitting assertion errors regularly has never been a problem. The latter is a proprietary fork of Postgres, which is peppered with highly useful assertions, both from the open source variant and subsequently, in a similar style, from the labors of contributors at the company.

More unit tests are always nice, but practically, having a cheaper and lower-overhead way to check invariants buried deep in the code enables more invariants overall. I see unit tests as a nice way to check invariants when it's possible to decouple assertions from the program, but they are generally more expensive to write, and that means there will be fewer of them. Inverting a program to expose private state for as many easy-to-express detailed invariants as one can think of is often not practical.

The two approaches can be combined as well, by simply running with assertions on while also using unit tests.

I think assertions are most useful when evolving complex software to do new things or fix bugs. They're also a good, functional form of comment. Practically they are much like an error (some performance-sensitive systems can turn them off in production), so the Go approach of when-in-doubt-leave-it-out is reasonable, but I personally miss them because I still write assertions; they are now just my own idiom and otherwise look much too similar to actual error conditions.

One community I haven't visited in a while has been Java. Is it possible that assertions are more frequently abused there?


I think you are confusing together a few different things. A sanity check that is always on is not an assertion (at least by the usual sense of the word). Nobody is against sanity checks, just like nobody is against motherhood and apple pie. What I disagree with is the idea that sanity checks should be turned off in production, which is the core idea behind assertions.

It seems like your experiences confirm my own. People toy with the idea of turning off sanity checks in production, but eventually reality sets in and they realize that this is a bad idea. Then they keep calling the sanity checks "assertions," because nobody wants to submit a search-and-replace patch for a big code base, and a few die-hards still cling to the dream of running without error checking.

This leads to a lot of confusion whenever I talk about assertions (oh, assertions? You mean those things that are always on?) And so the cycle continues.


> It seems like your experiences confirm my own. People toy with the idea of turning off sanity checks in production, but eventually reality sets in and they realize that this is a bad idea. Then they keep calling the sanity checks "assertions," because nobody wants to submit a search-and-replace patch for a big code base, and a few die-hards still cling to the dream of running without error checking.

No. They don't. Practically nobody runs Postgres or the product we shipped with assertions on (it would be too expensive, and requires recompiling besides), and the only reason people run with assertions on in Python is because it's the easiest way to do things (the default), whereas in most C programs the default is to not have the assertions. However, I note that people don't seem to use assertions as a crutch in Python in spite of them being on, because I very seldom see an assertion error crop up in production. When one does, I know the program has gone completely haywire and is no longer internally consistent, rather than merely encountering an error it couldn't handle, and report the bug as such. This is very rare; practically, they could be turned off.

However, I get reasonable number of assertion violations when trying to change software while in development or while reading the source, and that's where I find them very useful.

I do reserve a special annoyance for slick assertions that are inadequately explained via comment.


So far we've established that: 1. you run with assertions always on in production, 2. you have encountered errors caught by the always-on assertions in production (it may be "seldom," but then all errors are seldom, hopefully.)

How does this not prove my point? Turning off error checking is stupid. Your software will never be bug-free. Start dealing with reality and rename your assertions to sanity checks, which is what they are and always have been.


> So far we've established that: 1. you run with assertions always on in production, 2. you have encountered errors caught by the always-on assertions in production

No. I gave two examples, one where assertions are run in production (mostly because it's easier), and I'm trying to communicate with you that I don't see people leaning on assertions to find bugs in production code very frequently at all, and this is the fear of the Go FAQ. I gave another example where assertions are invariably not compiled into the program when running in production. Am I not being clear about this?

> (it may be "seldom," but then all errors are seldom, hopefully.)

False. Errors (like "no permission to open file", "out of disk", "out of memory") are distinct from 'an internal consistency check has failed.'

These happen all the time for completely legitimate reasons.

> Turning off error checking is stupid. Your software will never be bug-free. Start dealing with reality and rename your assertions to sanity checks, which is what they are and always have been.

Strong words. Okay, here's what I think is stupid:

* Software too slow for the purpose.

* Avoiding writing expensive invariants because it would make the software too slow.

* The notational similarity of a broken invariant that is intended never to occur vs. an error condition.

My solution has been to accept the last of these and to stop writing expensive invariants, even though I wish I could succinctly and idiomatically communicate to all maintainers that an invariant is considered impossible rather than merely an error condition, and run with the expensive invariants while in development (a global variable and praying to the branch-prediction gods can be close enough).


Needing lots of expensive invariants is almost "invariably" (see what I did there?) a code smell. You often see big, complex classes with 10 different booleans and a ton of assertion crap, when what is really needed is to refactor the class into smaller classes which are unit-testable. In C++, I've even seen people assert that an unsigned integer was >= 0. You can just feel the quality.

> The notational similarity of a broken invariant that is intended never to occur vs. an error condition.

Go has a notation for problems that are never supposed to happen: panic. For other errors, there are return codes.


> Needing lots of expensive invariants is almost "invariably" (see what I did there?) a code smell.

What's your definition of need? I want a lot of invariants because I have found they reduce the chance of error. This is also the finding in that Microsoft case study I linked to. The more invariants, the better, and that means making them cheap to write.

> In C++, I've even seen people assert that an unsigned integer was >= 0. You can just feel the quality.

Entirely reasonable in some cases; it's telling me: "this software is not defined for negative numbers". There does exist code that accepts an unsigned number (because that is what callers found most convenient at the time, and casting is wordy) but is nevertheless defined on negative inputs, and there it would not be valid to place that assertion. Those assertions are there to assist future collaborators on your software, and are shorter to write than:

    // This algorithm is only defined on positive integers
Until one is using a language with real dependent types, I think your specific example is not on the face of it as egregious as you say it is.

> Go has a notation for problems that are never supposed to happen: panic. For other errors, there are return codes.

Unfortunately I do see panic used -- even in the standard library -- as a flow control mechanism also. But yes, I do use panic for this reason, and tend to eschew using recover for convenience in most situations. It is the lack of an idiom to obtain complete clarity as to the nature of the panic that I find mildly irritating.


Asserting that unsigned numbers are >= 0 is a good idea... if you want to write code that ends up on thedailywtf.com.

I think we're done here.


OK, my response was maybe a little more snarky than I intended. Anyway, I think we are going to have to agree to disagree about the utility of turning off error checking in production.


Always-on assertions are a good thing. They help stop bugs before they become exploitable security vulnerabilities, for example. I would much rather have my web app crash on bad input than proceed to execute a malicious SQL query.


Assertions are a necessity in systems programming, especially when you're talking about OS kernels.

In one software project I know of, assertions are used to ensure that situations that should never happen never do.

This could be as something as harmless as incorrect (but harmless) usage of a device driver interface, or slightly more terrifying cases that could cause data loss if they occurred.

Before you claim that it should have more unit testing, it actually has mountains of it; again, the assertions are a safety net that prevents even worse things from happening without significantly impacting the performance of a production system.


>All of the thumbs-up opinions I share, with the emphasis on that Go fills a niche that is otherwise a relative vacuum as-is.

D, Rust, modern C++, ...


Disagree.

D is but one language (which does not, in and of itself, fill a vacuum), and I have serious doubts about the social situation around its compilers. However, I think it meets most of the other criteria. It also has a lot of features (though not so ridiculously many as C++); I'd say it's closer to C++ than to C in terms of language size.

Rust is brand new, pre 1.0, and I'm following it with great interest. I do not intend to use it until the low-level error handling has been fleshed out: https://mail.mozilla.org/pipermail/rust-dev/2012-March/00145...

Qualitatively, C++ feels closer to C than Go in many ways in terms of debugging and system aspects, and many C++ programs are written in the "just enough C++" style, which is very nearly C -- and of all possible evils, that is probably the least-evil one in my eyes.

Notably, C++ and D both have try/catch/finally-style exceptions and inheritance, and while these features can be handy in specific situations, I think neither is desirable. The same applies to Python, a language I also basically like, but I prefer not to have them.


You're kidding right? Nothing stops you from writing any of these languages in a style like Go that doesn't use multiple inheritance and uses error codes. I suspect what's going unspoken is that you prefer others to code like you think instead of how they think. And Go gives you that because, at times above all arguments to the contrary, it strives to have only one way to do something at the language level.


If you have control flow that increases cyclomatic complexity it should be visible (explicit if), not hidden in an expression (ternary operator).


The if/else is no more explicit than the ternary operator.

On the contrary, the ternary operator makes the specific operation (setting ONE variable) more explicit, and ensures you only set one var. Consider:

  if a == 1:
      foo = "PASS"
  else:
      foo = "FAIL"

and

  foo = (a == 1) ? "PASS" : "FAIL"

Why repeat the foo (and risk an error by mistyping it the second time, especially in a dynamic language or with type inference)?

And they add the same amount of cyclomatic complexity (not that you implied otherwise, just sayin').


The Use requirement is related to Go's refusal to have compiler warnings. It is a well-intentioned step in the wrong direction, overtaking even bondage-and-discipline languages. Haskell's GHC just added a flag to make type errors warnings!

The closures capturing references thing is a big design bug. If a programmer wants to share a reference, he has the & operator.


I don't think it's a design bug. Personally, I would prefer if for each iteration of the for loop, the variables were fresh, so that even if they were captured by reference, they would be different references.

So, you would be able to do:

  l = [];
  for i = 1 .. 10 {
    f = () -> { i += 1; }
    g = () -> { return i; }
    l.append({adder: f, getter: g});
  }
This would also play nice with loop unrolling. If the programmer wants a variable that is shared between the iteration of the for loop, the declaration should be outside the loop.


Yeah, the use requirement is annoying. It would really be nice if there were a way to make that check optional.

As far as closures capturing references-- if closures captured by value rather than by reference, how would maps and slices be handled? Anywhere the closure used one of those items, it would have to be deep-copied-- a potentially slow operation. Even deep-copying structures could get expensive. The worst part would be that this cost would be hidden from the programmer, who would wonder why his code got so slow all of a sudden. I think having them capture references is the only reasonable design given the other design choices.


Copy on write, maybe?


By the way, it's not true that Golang strings are always UTF-8. Golang source code is always UTF-8, but strings can contain arbitrary bytes. Trying to force all strings to be UTF-8 can lead to fiascos in the real world, where legacy encodings still do exist. (Although you SHOULD NOT add to the problem by creating new systems that use them!)

http://golang.org/pkg/unicode/ lists a bug at the bottom: "There is no mechanism for full case folding, that is, for characters that involve multiple runes in the input or output." So basically, they are aware of the bug / lack of feature and are working on it.


Full case folding is already in tip and will almost certainly be part of Go 1.1.



