Go was a huge win for us in our server infrastructure. All the high-performance but hard-to-maintain C stuff got migrated to Go in six months and we never looked back. The code is much more manageable, needs far less debugging of memory management, is much more readable, and is shorter by 60%. All that at a negligible performance penalty.
For us it's a freaking story of awesome success. Go = ROCKS!
I didn't read that he was claiming this was exclusive to Go. I prefer we not require a dissertation defending language choice each and every time we have a short anecdote about our experience with it.
Now if he said they had no such luck w/ Java/D/Rust, then perhaps I could understand your ultra defensive fanboi position.
But since the post was on the topic of Go, and we do have quite a large Go infrastructure, I just wanted to comment that Go is awesome. More people should know about the awesomeness and perhaps experience the same as we did.
Did not downvote them myself, but because of the seemingly kneejerk cynicism, I would imagine. OP is simply praising Go and sharing a success story - there's no need to jump in and point out (with a somewhat condescending tone) that other languages offer similar benefits as Go. While (maybe) true, it doesn't exactly take away from Go's accomplishments or how it's helped OP.
It's kind of like if I excitedly told my friend I'd run an eight minute mile, and she replied, "oh, so what? Lots of people can do that". Technically true, but unnecessary.
I believe that these days you need some secret threshold of karma (HN internet points) to be able to downvote. Now that you mention it, I have indeed seen way fewer power displays of the infamous "downvote mafia" recently, so I guess it works.
Probably, but also probably not on the same level. This doesn't seem like a very useful objection, both because it's obvious and because it's very weak.
Probably true. But it would take us longer, the code base would be larger, and it would still be harder to maintain and more error-prone compared to Go.
It's a very old project and super important core technology - we did refactors before, and I was somewhat skeptical when the lead devs on the project suggested using Go for the next "refactor". I was blown away by the quality, speed and productivity.
If you're building server infrastructure that needs to be performant, consider Go.
People have been poopoo-ing Go lately on HN, but let me tell you after using it in production for over a year: it really makes developers' lives easier. Why?
1) Error handling. It gives you the ability to impose defensive practices, making network or other I/O failures and unexpected responses from external or internal calls easy to handle (a minimal sketch follows this list).
2) A lot of errors can be caught at compile time, which means fewer errors at runtime.
3) It's so simple that all of the built-in functions and syntax can be expressed in less than 16 lines, which makes reading your code, and anyone else's code, very very easy. Also, fmt is sick.
4) Dead simple concurrency.
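To make point 1 concrete, here's the kind of minimal sketch I mean (the URL is made up; this is just the shape of the pattern):

package main

import (
    "io/ioutil"
    "log"
    "net/http"
)

func main() {
    // every network call hands back an error value you have to deal with explicitly
    resp, err := http.Get("https://example.com/health") // hypothetical endpoint
    if err != nil {
        log.Fatalf("request failed: %v", err)
    }
    defer resp.Body.Close()

    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        log.Fatalf("read failed: %v", err)
    }
    log.Printf("got %d bytes", len(body))
}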
But all this has still not helped me push Go as the go-to language at the company I work for. Why? Sometimes you work with people who just don't want any change.
Give Go a try; you'll be shocked how well it's designed.
Go is often judged (and praised) based on its concurrency. But I use map/filter/reduce, etc. far more often than I use concurrency. So no matter how great the concurrency story is, having to use for loops all over my code makes my life much worse.
This was actually fairly heavily discussed on the Golang mailing list a few months ago[0]. For now, I believe the designers of the language would rather keep things as simple as possible. For something that can be done with a fairly simple for loop, they would rather have you write that out. If certain functionality requires a complex amount of code, they would consider adding either a package to help or changing the language if needed.
Loops make your life worse, really? Loops are trivial code that are nearly impossible to screw up. Writing a loop and writing map/filter probably take nearly the same time. If this is the biggest problem you have in coding, you're luckier than most.
How do you mean, exactly? Go can iterate over maps, slices, arrays, strings, and channels with the "range"[0] keyword. The values returned by range are copies of whatever is in the structure, so they should be immune to mutation (unless you're messing with pointers).
He means that what you actually want to do is modify some state outside of the loop when you're iterating, which means that your loop doesn't provide a "safe" context where you know what to expect.
Are you sure you're modifying the correct outside variable? Are you sure you really are modifying it:
bar[i] = f(foo[i])
and not creating a new one:
bar[i] := f(foo[i])
? Are you sure you're not also modifying another variable that you didn't know was "linked" somehow?
Given the direction Go has taken, it would make very little sense to introduce functional programming bits. But you must admit that the immutability and lack of side effects of FP give you a lot of assurance about what to expect.
I think you need to provide an example of what you're trying to say, as the syntax you showed above is not valid. The := operator is for defining a new variable, not for creating a new instance of some type; it is just shorthand.
foo := 5
is shorthand for:
var foo int
foo = 5
The "range" keyword over an array gives you the index and the value at that index. If you don't need the index or the value, you can put a "_" in its place (to save on the allocation).
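Something like this minimal sketch shows it (variable names here are just for illustration):

package main

import "fmt"

func main() {
    foo := [3]int{}
    for i, v := range foo {
        v = i + 1 // modifies the copy, not the array element
        fmt.Println(foo[i], v)
    }
}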
You'll notice that foo[i] is still printing "0", because range always creates a copy of the object assigned into v. Unless you start using pointers, you'll never have mutation issues.
If composability, avoiding mutation, and eliminating loops are the focus of what you want from a language, then Go is definitely not trying to solve your problems.
But maybe, rather than just carping, you should wait until you really understand why Go is trying to solve the problems that it is actually trying to solve. Because I don't think you do understand that yet.
Go is trying to solve the kinds of problems you run into when you have a multi-million-line code base that you have to maintain over two or three decades. At that scale, you get new problems - not just more of the same old programming-in-the-small problems.
I'll give you this much, though: Loops are easier to screw up than the grandparent seems to recognize.
Mutation is a problem that every program must deal with. Map/filter/reduce and company are just a good way to isolate mutation. Your entire application doesn't need to be functional to benefit from some functional behavior.
Likewise, composability improves readability and isolates code into small parts that can be assembled. This is beneficial to large applications as well.
You seem to indicate that concurrency is the only problem worth investing in with large applications, but they are not mutually exclusive. Erlang does a good job of both, for example.
I never said that concurrency was the only problem with large applications. I never even meant to imply it.
For example, one of the big problems is circular dependencies. Go solves this by explicitly prohibiting it - your dependencies must form a directed acyclic graph. You therefore have to work out a solution the moment you would want to introduce a circular dependency. You can't introduce even the first one. That means that you can never build up the kind of nightmarish hairball that large projects often turn into.
Is it painful to have to prevent circular dependencies all along the line? Certainly. But it prevents more pain later.
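For anyone who hasn't hit it, the compiler refuses the cycle outright. A sketch with hypothetical package paths (two files shown together; it will not build, which is the point):

// a/a.go
package a

import "example.com/project/b"

func A() { b.B() }

// b/b.go
package b

import "example.com/project/a" // rejected: "import cycle not allowed"

func B() { a.A() }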
Now: Do I know that functional programming, map/filter/reduce, and avoiding mutation cause problems at scale? No, I don't. On the other hand, I doubt many projects have been built using FP at this scale, so one can't assert that FP will scale this far.
What are some other things? Having no circular dependencies doesn't seem very novel or something that requires much of a tradeoff. Although I will admit I am more of a fan of it after having used it (F#, which requires tight opt-in to mutual references), than before having used it (at which point I would have said "just don't do that").
HN's reaction to Go reminds me of Stroustrup quote about C++:
..."There are only two kinds of languages: the ones people complain about and the ones nobody uses"
So much complaining about how Go lacks this and that features of shiny new (and sometimes unfinished) language X. In the meantime, as time goes by it seems that more and more real companies use Go code in real production and are quite happy with it, despite all the shortcomings frequently listed here.
It seems like Go is reasonably well designed given its creators' goals, and many people like it, which is great. But I think you're being unfair to (some) critics. It's not that Go is missing "shiny new" features - Go is missing features that have been established for a very long time; generics and sum types have been around for decades and have been proven to work.
And to your last sentence, use in production is not a good measure of language quality (except in a tautological sense), because many non-technical factors strongly affect popularity.
Yeah, but the point of Go is to be minimal and easy. Basically, this makes it really hard to be clever. In both senses, unfortunately, but I see how it can be a net win for big projects.
I think that's perfectly fine, though it's not my cup of tea (and I doubt that empirical evidence would show that Go's particular set of features maximizes "big project" maintainability, though such an experiment is sadly impossible to arrange). And I think that the point, "Critics focus on missing features, but Go's philosophy is to be minimal" is definitely valid. But, "Critics focus on how Go is missing shiny new features (that may not even be fully implemented)" is definitely not true (of most critics). Most people aren't complaining that Go doesn't support dependent types, or row polymorphism, or cutting edge technologies. They're complaining that Go doesn't even reflect the state-of-the-art of 30 years ago.
They're complaining that Go doesn't even reflect the state-of-the-art of 30 years ago.
Clearly that was a very deliberate choice on the part of the Go creators, not the result of oversight or ignorance. I suspect they simply didn't want to reflect the state of the art 30 years ago, 20 years ago, or today. I don't find that at all surprising, because people who want to get things done often like very simple tools which stay out of the way. To each their own though, and I'm sure the go creators would be very happy to see other languages used instead of go by those who prefer more features, more complexity, state of the art from 30 years ago, etc. I do find myself puzzled by the hostility it generates though - it is just a language, one of many, and about which some very ordinary claims are being made (easy to learn etc).
There are a ton of anti features that have been introduced in the last 30 years. Things that are clever but make your code harder to understand and harder to maintain. Many of these are things people like, but have proven to be somewhere between not useful and a nightmare.
I'm not advocating adding every feature introduced in that span, only the ones that are not problematic. As a specific example, what's the downside to disjoint unions? They add minimal complexity while adding a ton of safety. I'm not aware of any argument that they are either not useful or a nightmare - could you point me to one?
So, by disjoint unions, I'm going to assume you mean sum types, like a value that can be either a pointer or an error, but never both, and you use special keywords to access one or the other.
Sure, that's really useful - you guarantee that you can never access the unset half of the value. But it adds a lot of complexity, too. You need a way to declare values of this type - this means more keywords and/or more special syntax. You need a way to then access both halves of the values, so now you have to add pattern matching... that's actually a lot of complexity to add to a language that only has 25 keywords.
All that, instead of just doing what Go already does, which is to return two values and just have a convention of checking the error before the value... which works really well 99% of the time, and doesn't require all the rest of that complexity.
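That convention, for reference (the file name is just an example):

package main

import (
    "log"
    "os"
)

func main() {
    f, err := os.Open("config.json") // two results: the value and an error
    if err != nil {
        log.Fatal(err) // check the error before touching f
    }
    defer f.Close()
    log.Println("opened", f.Name())
}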
The problem Go is trying to solve is that of programming at scale (say, a ten-million-line code base that will survive for two decades). The problem it is trying to solve is not how to make it easier to write very small parts of the program. These two are not the same problem. (Problems of scale are not the same problems but more of them. They are different problems.)
Does the functional style of maps and composition cause problems at scale? I don't have any data. However, I would not begin by assuming that Go's designers were either stupid or ignorant of functional programming idioms.
It sounds like you're trying to stuff too much in a single line. Line returns aren't in short supply. You can just write the loop and then write another loop. It'll be more clear that way anyway. And if you really do this a lot, write a nicely named helper function. That will also make your code more clear.
It's not clearer; you suddenly have to read 5 times more code, and your code no longer makes intent clear. You now have to understand the intent of multiple loops and make sure you're not introducing subtle bugs.
I cannot see a single way in which that is objectively "better".
More code does not mean less clear. I've seen some really hairy list comprehensions that I broke out into 3 loops and made them 1000x more clear, because you could actually follow what each conversion was doing.
You'll have to explain to me what you mean by composition if you don't mean foo(bar(baz())) or foo().bar().baz(). Composition means putting things together; if you're not doing it on one line, then what's wrong with a loop?
For example, Iterators are composable. I can return one from a function, append another iterator to the chain, return that and then iterate over the whole result (possibly, this leads to only one iteration internally).
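To make that concrete, here's roughly what you end up hand-rolling in Go with closures if you want composable iterators (all names are mine; just a sketch):

package main

import "fmt"

// an iterator as a closure: each call yields the next value and a "more" flag
type intIter func() (int, bool)

func fromSlice(s []int) intIter {
    i := 0
    return func() (int, bool) {
        if i >= len(s) {
            return 0, false
        }
        v := s[i]
        i++
        return v, true
    }
}

// chain composes two iterators without materializing either one
func chain(a, b intIter) intIter {
    return func() (int, bool) {
        if v, ok := a(); ok {
            return v, true
        }
        return b()
    }
}

func main() {
    it := chain(fromSlice([]int{1, 2}), fromSlice([]int{3, 4}))
    for v, ok := it(); ok; v, ok = it() {
        fmt.Println(v)
    }
}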
Type parameterization has been around for a long time, but all existing implementations make tradeoffs the Go team is not willing to make: http://research.swtch.com/generic
If there exists an implementation of type parameterization that imposes neither a compile time cost nor a run time cost, I would be interested in reading about it.
The languages and compilers I'm familiar with (Scala, Haskell, MLton) all pay a cost, usually in compile time.
The answer, as I understand, is that .NET has to copy code for each non-reference-type instantiation. .NET gets away with it because most types are reference typed and can use the same code.
In F#, you can use hat types via inlining to get even more power. Like creating a map function that works directly on List, Array, and Set - but without using any common interface. Pretty neat, but it emits the function's IL into every callsite so it can get out of hand.
I recall there being a mailing list thread on Go where the designers addressed this. I've no idea how Go is implemented so perhaps the multiple-copies problem is actually significant.
Technically, I believe that value types with the same layout could share implementations (though I don't believe this is implemented). In any case, if you don't have generics you have to do the same thing by hand anyway (e.g. create an IntList type, a DoubleList type, etc.), so I can't see how that's any better. More likely, I think, is that Go doesn't do JIT compilation, so there would be some challenges getting a comparable implementation built on Go's runtime. But .NET's new native toolchain solves the same problem, so it's clearly not insurmountable.
In cases where static linking is used, which happens to be Go's case, this can be done at link time.
Also addressed by Ada generics for example. There are many others.
Strongly typed languages with generics go back to CLU (1974), so there is plenty of research material, as well as real languages, if one steps outside the Java/C++ implementation mindset.
The Go designers clearly had full awareness of every language feature available in alternatives. They consciously and intentionally made the choices they did. This is obvious, of course, but often it is stated as if they came out of a cave, language in hand, unaware of all the great things they were missing.
And to your last sentence, use in production is not a good measure of language quality
It is actually a great indicator of the gap between abstract language tourism, and practical day to day development.
I will say again that Haskell is held as the perfect language and set of choices on here constantly (particularly when used to denigrate Go). Yet it is used for perilously few actual solutions (despite being around for decades).
Sometimes the things that seem incredibly important and of great value just aren't such a great value in the real world. Similarly, things that seem minor end up being very important.
Perhaps we're talking past each other - I'm certainly not saying Go should be Haskell (I've never even written a Haskell program!). Go's designers are clearly not stupid, and many people seem quite satisfied with their design choices. But that doesn't mean they "had full awareness of every language feature available in the alternatives", and even if they did, smart people are still susceptible to the blub paradox.
As for use in production, I agree that it can be a fair barometer for "utility", but I strongly disagree that it's a good measure of "quality" (as in, technical design choices). We live in a path-dependent world rife with network effects, so quality per se just doesn't matter all that much, and non-technical factors like "being backed by a major corporation" can matter a lot. I'm sure there's more Visual Basic in the wild than Go, but I'd hardly use that to argue that it's a superior language, or that Go proponents are "abstract language tourists".
>But that doesn't mean they "had full awareness of every language feature available in the alternatives", and even if they did, smart people are still susceptible to the blub paradox.
I think the blub paradox applies to everyone. Unless you spend time using a feature in earnest, it is easy to convince yourself that it's not valuable. Do you have reason to disagree?
No offense, man, but this statement of yours is an example of the blub paradox.
"But when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn't realize he's looking up. What he sees are merely weird languages. He probably considers them about equivalent in power to Blub ..."
It is like you looking up at Thompson and not realizing that you're looking up. You probably consider Thompson equivalent to yourself and everyone else. (Again, no offense intended, though I can see how it can sound kind of offensive.)
On the other hand, unless you spend a lot of time using a feature in earnest, it's hard to know that the feature is actually not necessary because it's an enormous workaround for something else.
> Suggesting application of Blub paradox to the people who invented Unix ...
You mean the same people who disregarded memory-safe systems programming languages and decided to create their own "unsafe by default" one?
The same people who created a text-based operating system while, at Xerox PARC, GUI-based workstations were being developed in memory-safe systems programming languages?
I was thinking about how much space they saved in their C strings by having a null byte instead of the Pascal-style pointer-and-length struct ... on their original 16-bit systems, it was one. One byte! So much pain was caused by that decision.
More than 20 years ago I wrote some bits in PL/I, and I wrote Algol on a systems project (an embedded OS and dev tools for it). Man, it is ugly.
It sounds a bit oxymoronic, at least in practice. I have yet to see a normally functioning system written in a "safe systems programming language".
>The same people that created a text based operating system, while at Xerox PARC GUI based workstations were being developed in memory safe system programming languages?
I hope you don't mean Unix here, because Unix has nothing to do with either "text based" or "GUI based" - it is completely orthogonal to that.
> But that doesn't mean they "had full awareness of every language feature available in the alternatives", and even if they did, smart people are still susceptible to the blub paradox.
I'm pretty sure Ken Thompson & co. were fully aware of generics and sum types (to use your own example). Go trades the complication of implementing and using those (and other) features for a strong concurrency story. For a lot of people, that's a good tradeoff to make.
There are also languages that are frequently used, infrequently complained about, and reasonably well designed (like C). I think Stroustrup has an emotional interest in thinking otherwise.
I'm not sure it's reasonable to say that C is infrequently complained about. I think it's more likely that we've just internalized the complaints about C, and that they now go without saying. (C is my favorite language, for whatever it's worth).
C is infrequently complained about? Seriously? The thing with C is, it's been around for >40 years, so all complaints have already been rehashed to oblivion. The dozens of "C replacement" languages developed since then stand as a witness to this.
I find some philosophy of C and Go similar, BTW. Minimalism is an art.
Go is only superficially minimalist. Green threads and other threading primitives, built-in non-trivial data structures, inherent heap usage and garbage collection, etc.
Good thing production software deployment and maintenance has a much smaller focus on computer science theory than real world use cases.
If all business decisions were based on pure computer science, we might very well have much better software, but I'm inclined to believe we wouldn't have so many different and great products out in the world.
These are all considerations that should really be part of computer science in the first place. There's too much focus on the "Spherical Cow" part of writing software and not enough/any focus on how that software actually behaves at a modern scale/style of deployment. (At least in the curriculum I remember).
I'll take a swing. These aren't 'destroy computer science' worthy, but they are a MASSIVE step back in the development of languages.
:.:.:
Go lacks a lot of basic functionality that a lot of other languages take for granted, e.g.:
In Rust you can type
fn id<T>(item: T) -> T {
    return item;
}
And you assign the type to the function. This is only possible in Go by double casting the type, which is really messy, and is what Java did in 2004 for generic functions (and was ultimately useless).
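To be clear about what that workaround looks like (my own sketch, not from any real codebase):

package main

import "fmt"

// without type parameters, "generic" code goes through interface{}
func id(item interface{}) interface{} {
    return item
}

func main() {
    n := id(42).(int) // the caller has to assert the concrete type back out
    fmt.Println(n + 1)
}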
:.:.:
Let's say you want to iterate over an array in, say, Java.
for (byte a : bytes) {
    x += a;
}
This also doesn't exist in Go. Yet it does in Python, Java, and several others, thanks to iterators.
:.:.:
Let's say I'm adding vectors and dark magic has told me to use '-' for the dot product. In many languages (Rust, C++, Haskell) this is completely possible, as '-' is just a function: the Sub trait's sub() in Rust.
You can't in Go.
:.:.:
Now let's talk about pointers. When you want to signify that data doesn't exist in Go, you pass a null pointer (nil, 0x0). This is wrong on almost every single level. You're not only creating a back door, you're bypassing your entire type system.
The only thing that stops this from being common practice is that Go has such good error handling, but it still exists! Why?!
:.:.:
Immutable variables, data structures, etc. are a thing of beauty. In Rust and Haskell all values are immutable by default, and most languages (C, C++, Java) let you declare variables, data structures, etc. as immutable... Go doesn't.
:.:.:
Go has no support for turning off compiler safety features. Rust offers the
unsafe { ... }
block for isolating code that may do strange things; Go doesn't.
Not sure why you think you can't iterate over an array in Go:
for _, a := range bytes {
    x += a
}
No, there are no generics. I rarely need generics; usually I need a specific type, and would just waste time making a generic one.
No, there's no operator overloading, and thank god. + is numeric add; I never have to wonder if + might be some crazy O(n^2) function. Yes, that makes Go bad for math and science code. That's OK; Go is not for every application.
Go has unsafe and reflection; they're just strongly discouraged.
Yes, nil exists. Big deal. You know how often I've seen nil pointer exceptions in a year of writing Go full time? Once. Because Go has multiple returns and you never look at the pointer before checking the error.
No, there are no immutable data structures... and I've never missed them. You can approximate them with interfaces that expose only getter methods, but it's rarely worth the effort.
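For completeness, the getter-only-interface approximation looks roughly like this (type names made up):

package main

import "fmt"

// a read-only view: the interface exposes getters only
type Point interface {
    X() int
    Y() int
}

type point struct{ x, y int }

func (p point) X() int { return p.x }
func (p point) Y() int { return p.y }

func NewPoint(x, y int) Point { return point{x, y} }

func main() {
    p := NewPoint(1, 2)
    fmt.Println(p.X(), p.Y()) // callers can read the fields but not mutate them
}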
Now instead of typing the same thing over and over and over again I call a function (literally what they're designed for), and it works for floats, doubles, AND ints.
>no operator overloading, and thank god. + is numeric add, I never have to wonder if + might be some crazy O(n^2) function.
Overloading is normally done on a per-data-type basis to avoid exactly what you state. But yes, it is a big mathematical/scientific point; it can really simplify your life when working with some data structures.
>Yes, nil exists. Big deal.
No, that's the problem. Its existence is a bug, and it has no reason to exist; as you said yourself, it's rare, you never see it, you hardly use it... so why even include it?!
>[...] immutable
Immutability is more of a programmer hack than a language hack. Rust and Haskell make me ask myself "will I actually change this value?" when declaring a new variable, and that adds an extra mental step and self-check of your own code while programming.
He said he rarely sees code fail because of nil dereferences, not that he rarely sees nil. "if err != nil {" is probably the most commonly written line of Go code by a large margin. nil fits well with the language's zero-value defaults. It may not be theoretically sound, but in practice it fits within Go very well and rarely causes the problems it tends to in other languages with null.
I don't think your criticisms of Go are all wrong, but I do think they miss the point of Go. Go is designed to be a "best hits" sort of language. It doesn't try to do anything fancy or new (though several of its features are taken from lesser known languages) it simply tries to do what currently popular languages do, but much better. It will fail many CS theory benchmarks, but it does so in the aim of being productive now.
To say Go doesn't do anything new is wrong, however. The language specification doesn't include new material, sure, but the language tools are incredibly well engineered. go get just works (but don't use it for long-term dependency management; that's not what it's designed for). The documentation is stellar, and it's really easy to make good documentation. Compile times are lightning fast. gofmt is by far my favorite feature: all Go code looks the same, and it even saves the time spent trying to format code to look nice.
As for your specific complaints:
:.:.:
No, Go doesn't have generics, but for most cases that's OK. The saying goes, when you have a hammer, everything looks like a nail. However, Go does give you a screwdriver (interfaces), and it turns out a lot of things work well as screws. Generics aren't easy: they complicate the language (which goes against one of the language's goals) and increase compile time or run time (which goes against other goals). They haven't said generics will never be in the language, just that they haven't found a solution they like and that it won't be in the language any time soon.
:.:.:
You can iterate in Go. Otherwise for loops are a tried and true method.
:.:.:
This complicates some code but makes the language a lot simpler. Yes, it's a trade-off, and the language authors' choices may differ from your preferences.
:.:.:
Yuck, nil pointers, I agree. However sometimes an ugly feature makes the whole language look a lot better. Nil usage aligns very well with the design of the rest of the language. Gross? Yes. The wrong design decision for Go? Definitely not.
:.:.:
Immutable values complicate the type system and require brain power to be spent on them. They may be nice in other languages, but they have no place in Go.
I think the bit on managing dependencies could be improved with a bit more information about what other people have done. For example, SoundCloud maintains one monolithic $GOPATH in a single git repository that they can then selectively update. My personal preference over any of the flavor-of-the-month Go dependency managers is gopkg.in[0], because it doesn't mess with the official toolchain (i.e. "go get", "go build") but still allows for the versioning of packages.
> For example, SoundCloud maintains one monolithic $GOPATH
> in a single git repository that they can then
> selectively update.
No, we tell developers to use a single $GOPATH on their machine, and check out/edit code directly in the canonical location, e.g. $GOPATH/src/github.com/soundcloud/foosvc
This is what source control is for. You switch to the branch and hack on it. In the rare case you need different versions of dependencies, you use a tool like godep (but if you have to resort to that, you're doing something wrong, your dependencies should have stable APIs).
Don't use multiple spots on your hard drive for code in source control. Use source control for what it's made for.
> This is what source control is for. You switch to the
> branch and hack on it.
We're also running a microservice (SOA) architecture; each repo represents one service, with one (hopefully) tightly-constrained purpose. It's very rare that we're doing more than 1 or 2 feature branches in a repo at one time.
Our GOPATH per project is `pwd`:`pwd`/vendor and in vendor we use a git subtree to a shared repo where we have forked our external dependencies. It isn't great because merging in updates from upstream is annoying, but it is simple and isn't something we have to think about 99% of the time.
Thank you for this clarification. I'm fairly certain I was under that impression due to a misinterpreted description of this system. This does support my point, though, that more people need to discuss this topic. Instead of "generics", "versioning" needs to be the most popular complaint with the language, because it's something that can be standardized in the community.
The problem with Go is that it wasn't designed to implement any large projects like C was. So the designers would typically make trade-offs based on perceived beauty and elegance rather than utility and potential pain in the real world. Another side effect of this is optimizing for compiler simplicity rather than for expressiveness of the language. It's easy to forget that we build compilers not for the compiler's sake but for its target users. Here's the quote from Go's history:
we started off with the idea that all three of us had to be talked into every feature in the language, so there was no extraneous garbage put into the language for any reason.
I would also argue C++ started in a similar fashion but went in the exact opposite direction. The balance between minimalism and feature creep can only be achieved if you are designing the language as a side effect of building your own large-scale project, combined with a sense of urgency.
Go is currently a lucky language. There is a huge vacuum that Java has left after the Oracle acquisition and its prolonged stagnation. For people who need a compiled language for a relatively new code base, Go ends up becoming the choice regardless of its shortcomings. People will enjoy its minimalism until the code base grows into a monster and the missing features start to be sorely missed. This is nothing unusual in the world of programming languages. We have seen COBOL become a gold standard at one point, and PHP is still pretty hot.
"The problem with Go is that it wasn't designed to implement any large projects"
What? Uh no. That's exactly what it was designed for. Big projects at Google. It has been stated by the creators of Go multiple times.
The simplicity of Go helps in many ways that are not immediately obvious. You always know how memory is laid out in a struct, so you know how much memory you're copying with every instruction. You always know when you're generating garbage, so you can take steps to avoid it when it matters. In general, everything your code tells the computer to do in Go is very obvious, so you can actually reason about what your code does on the small scale and not just on the big scale. And yet, it does this without the added unsafety of C, and without all the enormous complexity of C++. And it does that in a way that you just can't achieve with Java's "everything is an object".
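A trivial illustration of the "you always know the layout" point (the exact number assumes a typical 64-bit platform):

package main

import (
    "fmt"
    "unsafe"
)

type Point struct {
    X, Y float64
}

func main() {
    // the struct is just two float64s back to back, so assigning one
    // copies exactly these bytes - no hidden headers or indirection
    fmt.Println(unsafe.Sizeof(Point{})) // 16
}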
That's why people are flocking to Go. Because it has Java's ease of writing with C's ease of tweaking, and yet fixes a lot of the inherent problems in both languages.
Ken Thompson states that, initially, Go was purely an experimental project. Referring to himself along with the other original authors of Go, he states:[11]
When the three of us [Thompson, Rob Pike, and Robert Griesemer] got started, it was pure research.
Pretty good article on some of the challenges of getting started with Go. Certainly the package management issues point to Go's youth.
I would have loved a more detailed explanation of why they chose Go for a particular problem space rather than C/C++/etc. What problems is Go particularly good for, and for which problems is it a poor choice?
I considered both Go and Dart for my new web projects, but as a former C# dev I'm too lazy to live without an IDE (debugger, IntelliSense, etc.) and ended up with Dart. Would love to see more tooling for Go to make it appealing to "Visual Studio-like devs".
I'm using Go to hack together some web pages to help me verify data for my mobile app. I'm thinking about building out the pages. However, at the moment I'm using a simple shell script to start.
#!/bin/sh
export PORT=8080
go run go/spanishdb.go go/Html.go
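The Go side of that is roughly just a plain net/http server that honors $PORT (a simplified sketch, not my actual handlers):

package main

import (
    "fmt"
    "log"
    "net/http"
    "os"
)

func main() {
    port := os.Getenv("PORT")
    if port == "" {
        port = "8080"
    }
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "ok")
    })
    log.Fatal(http.ListenAndServe(":"+port, nil))
}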
Any good blogs on using Go in a production ready server? Should I run behind Apache?
Your best bet is probably to put it behind nginx and use something like supervisord to launch the server itself and restart it if it panics (supervisor will also handle log rotation etc).
Hear, hear for supervisord watching Go behind nginx. I have this set up in production; I'm even using the nginx tcpproxy plugin to reverse proxy some RPC servers.