I think one of the nicest things about this is its origins in CSP. By using channel communication for synchronization between threads, you eliminate all the hassle of worrying about mutexes and shared memory.
And they have a nice choice operator called select, which lets you wait on several channel operations (sends or receives) at once and picks randomly among the cases that are ready. See this example:
var c, c1, c2 chan int
var i1, i2 int

select {
case i1 = <-c1:
    print("received ", i1, " from c1\n")
case c2 <- i2:
    print("sent ", i2, " to c2\n")
default:
    print("no communication\n")
}

for { // send random sequence of bits to c
    select {
    case c <- 0: // note: no statement, no fallthrough, no folding of cases
    case c <- 1:
    }
}
The style corresponds to the CSP choice operator where the bottom select is doing something like
P = ( 0 -> P ) OR ( 1 -> P )
Or perhaps it's closer to the CSP non-deterministic choice operator: either branch can go ahead.
CSP doesn't make a distinction between sending and receiving on a channel (it doesn't have channels, merely the parallel, or interface parallel, operator to combine processes). Essentially, sending is offering to synchronize on a single value and receiving is accepting to synchronize on any value: run those in parallel and you get a communication channel.
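Roughly, in Go terms, that rendezvous looks like this (my own toy sketch, not taken from the spec):

package main

import "fmt"

func main() {
    done := make(chan int) // unbuffered: a send and a receive must meet

    go func() {
        // This send blocks until main is ready to receive, so the two
        // goroutines synchronize on the value 42.
        done <- 42
    }()

    v := <-done // accept whatever value the sender offered
    fmt.Println("synchronized on", v)
}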
I feel like I might be the only person in the world who isn't excited about this language.
I don't like the verbose C-style syntax. The examples I've seen don't have the clarity I've come to expect from functional programming languages like Haskell.
To me, the language feels like a C/C++ that has better language features for dealing with shared state and concurrency. That's nice, but it's just not enough to get me stoked about learning it.
I agree with your assessment and share it myself. What gets me is the "I'll keep an open mind" type of comments I see attached to criticism of the language.
I don't mean that as a dig against you personally, but what I mean is that a lot of people who admit they don't like the language still feel obligated to add a similar phrase to their statements - which I think is entirely because this came out of Google and has a couple classic rockstars behind it. If it were introduced by an unknown research student or hobbyist, it'd have been dismissed and declared dead from the very start and that'd have been the end of it.
(I admit that I also was influenced by this. I saw the initial specs, some examples, and immediately thought, "this seems... pointless." But I dug in deeper because of the reputations involved. My opinion hasn't really changed, but I gave it a lot more attention than I would have otherwise because of these factors.)
The reason this bugs me is that there's a lot of interesting research out there in the languages world. It shows up on HN pretty frequently, but it never really spills beyond places like here because people have no trouble dismissing or ignoring new ideas immediately when they come from "strangers." I guess I don't have a big point here, just that it's unfortunate that this is likely going to result in a huge uptake of Go when other viable systems language alternatives have existed for, in some cases, decades but seem doomed to be ignored just because there wasn't a big name attached.
What gets me is the "I'll keep an open mind" type of comments
I think the 'open mind' is indeed the correct attitude here. If a very experienced programmer with stellar credentials comes along and says 'this is a good solution to the problems I've faced in my career', I'm inclined to listen very patiently even if my first impressions are negative.
This doesn't mean that you are wrong about the syntax, but perhaps there is something useful below the surface that's worth digging to discover. Maybe the right solution will be a blend of a modern bracket-free syntax and the guts of something like Go.
For me, the syntax isn't really that important. Probably this is just due to my brain damage from coping with C.
I think that after they add some sort of generics, i.e. parametric polymorphism, the language will be semantically quite close to a modern ML-style language like OCaml, but with a syntax familiar to C hackers and native support for concurrency.
When I read through the documentation, the overwhelming impression I got is that this is a replacement for C in systems programming. It has the advantages of C, but with a lot of warts cleaned up, nice concurrency support, much safer pointers and arrays, and a lot of general pleasantness. I've been doing a lot of programming in C, not by my own choosing, and I would love to be able to do it in Go instead, because Go addressed the most painful aspects of C without losing the efficiency or the simplicity.
Yes, I prefer Python and Lisp and sometimes even weird stuff like Haskell for most programming. But Go has a niche, and in that niche it is a breath of fresh air.
I pretty much thought the same thing. Then I came to the conclusion that if it succeeds in its performance goals (10-20% slower than C), it has a good chance of becoming the language of choice for those hackers who now use C for everything, even when using C means that some memory leak or buffer overflow is likely to cause a security flaw in the future. If things like web and mail servers were implemented in Go instead of C, many security-related bugs in those applications might go away.
Rob Pike's presentation says something quite different. It says that gccgo "allocates one goroutine per thread" and that 6g has "good goroutine support, muxes them onto threads".
The language FAQ says:
"Goroutines are part of making concurrency easy to use. The idea, which has been around for a while, is to multiplex independently executing functions—coroutines, really—onto a set of threads. When a coroutine blocks, such as by calling a blocking system call, the run-time automatically moves other coroutines on the same operating system thread to a different, runnable thread so they won't be blocked. The programmer sees none of this, which is the point. The result, which we call goroutines, can be very cheap: unless they spend a lot of time in long-running system calls, they cost little more than the memory for the stack. "
The first is a random correction to a common meme. Channels can be created buffered or unbuffered. If a channel is unbuffered, the sender blocks until a receiver takes the value. If it is buffered, the sender blocks only when the buffer is full. Buffering can improve performance significantly.
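To make the difference concrete, here's a toy sketch of my own (not from the docs):

package main

import "fmt"

func main() {
    unbuf := make(chan int)  // unbuffered: a send blocks until someone receives
    buf := make(chan int, 2) // buffered: sends only block once 2 values are queued

    go func() { unbuf <- 1 }() // must send from another goroutine or we'd block forever

    buf <- 1 // returns immediately
    buf <- 2 // still room, returns immediately
    // a third send on buf would block here until a receiver drained it

    fmt.Println(<-unbuf, <-buf, <-buf)
}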
Secondly, when you set up complicated messes of goroutines talking over channels, it is easy to get deadlocks. There is a deadlock detection mechanism, but I don't have any details beyond knowing that some people have run into it.
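About the only detail I can add is what it looks like when it trips; even a sketch this small sets it off (the exact wording of the error may vary by version):

package main

func main() {
    ch := make(chan int)
    // Nothing can ever receive this, so no goroutine can make progress;
    // the runtime notices and aborts with something like
    // "fatal error: all goroutines are asleep - deadlock!".
    ch <- 1
}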
Thirdly, am I the only person in the world who looks at the channel mechanism and thinks how naturally it maps onto a capability-style security system? I've pointed that out a few times and nobody seems to bite on it. Odd.
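To sketch what I mean (hypothetical names, purely illustrative): hand a goroutine only the channel ends it is allowed to use, and its authority is exactly that set of channels.

package main

import "fmt"

// logger can only receive on events: the directional type means it can't
// send on it, and it can't reach any channel it wasn't handed.
func logger(events <-chan string, done chan<- bool) {
    for e := range events {
        fmt.Println("log:", e)
    }
    done <- true
}

func main() {
    events := make(chan string)
    done := make(chan bool)

    // Handing logger the receive end is like granting a capability.
    go logger(events, done)

    events <- "user logged in"
    events <- "user logged out"
    close(events)
    <-done
}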
"Thirdly am I the only person in the world who looks at the channel mechanism and thinks how naturally it maps onto a capability style security system?"
I'm thinking that the set of "people interested and excited about capabilities" minus "people who didn't realize channels can be used that way" is probably fairly small. I saw one of your earlier comments to that effect, but it wasn't very surprising to me, for instance.
I realized that they could be used that way immediately. My excitement is over the potential for the approach becoming more mainstream, and not over being surprised that it was possible.
>> "Why a whole post about this? Because, these days, we're all using multi-core computers. For the most part, the way that we're making things faster isn't really by making them faster: we've been running into some physical barriers that make it hard to keep pushing raw CPU speed. Instead, we've been adding CPUs: my laptop has 2 processors, and my server has 8 processors."
I don't know about this... Laptops have had 2 cores for quite a while. Are we really going to see 8-core laptops? I'd bet the other way - more CPU-intensive work will be done on servers, and laptops/desktops will end up as low-powered thin clients to the web.
Actually, with the i7's hyperthreading (which is in iMacs now and a few laptops), common desktops effectively have 8 "cores" to play with. There are 4 physical cores, each exposing 2 hardware threads, and they show up as 8 completely usable cores in activity monitors and such.
While it is quite true that some applications will be configured as thin clients, that approach requires mostly-ubiquitous communications with sufficient bandwidth. Whether multi-core or GPU-based or otherwise, there's simply no panacea here.
And if for no reason beyond ego allowing sales of products at higher margins, yes, there will be eight-core laptops. And once more cores are ubiquitous, folks will find uses for them, just like folks now need much more than 640K of memory in their computers.
My bet would be the exact opposite. There will be a push to get more power on the computers we have on our desks and in our pockets, and the only way to increase that is to add more cores.
Once there is a good solution for using lots of cores, we might even see a reversal of the gigahertz race, where many slower cores are cheaper than a few faster ones.
I'm a bit confused: I see several threads running apparently infinite loops generating numbers. It doesn't seem to be lazily evaluated (or is it?). Is this going to run until it runs out of memory (or thread handles)?
When it runs, it is going to spawn a thread generating a sequence of numbers, then another one that takes that sequence and generates a new sequence skipping multiples of a particular number, then another one doing the same. The number of threads seems to only go up. Is that the right analysis of the code? It doesn't seem right to me.
That blew my mind for a bit too. It's not lazy in the sense of postponing evaluation, it just blocks until something reads from or writes to the channel. This could go on forever assuming you have something that reads from the channel in an infinite loop (as it looks like main() is doing, actually...), but the generators themselves only loop when the channel is interacted with.
There's still the ever-increasing number of concurrent goroutines, but what just helped me accept that a bit is the fact that each goroutine does not necessarily map to one thread (it actually does in one of their compilers - there are two - but that's an implementation detail).
Yes, it's lazy. The sieve outputs on a channel, which blocks until you read it. It will only generate more numbers as they're consumed. Same thing with the filters.
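For reference, the sieve being discussed is roughly this (reconstructed from memory, so treat the details loosely):

package main

import "fmt"

// generate sends 2, 3, 4, ... on ch; each send blocks until the downstream
// filter is ready to receive, which is where the lazy behaviour comes from.
func generate(ch chan<- int) {
    for i := 2; ; i++ {
        ch <- i
    }
}

// filter copies values from in to out, dropping multiples of prime.
func filter(in <-chan int, out chan<- int, prime int) {
    for {
        if i := <-in; i%prime != 0 {
            out <- i
        }
    }
}

func main() {
    ch := make(chan int)
    go generate(ch)
    for i := 0; i < 10; i++ {
        prime := <-ch
        fmt.Println(prime)
        ch1 := make(chan int)
        go filter(ch, ch1, prime)
        ch = ch1 // the next prime is read through one more filter
    }
}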
At first I was kinda psyched about GO (And SPDY), but after looking at it...
If computers can read shit like Brainfuck (and therefore any damn syntax we come up with), why the hell do so many programming languages /choose/ to look like shit? I mean, Ruby does a pretty good job of avoiding this, but looking at things like GO and Java I wonder what the hell went wrong.