"Go is designed to discourage developing higher-level abstractions."
Yes, that does seem to be the distinctive thing about Go. It's almost like someone read Paul Graham's "Blub" essay[1] and thought, "What would it mean to take seriously the idea that Blub is the best language?"
Everyone knows that building abstractions has a cost - the cost of building the abstractions themselves, the cost of figuring out the particular abstractions employed in a given project, and the cost of comprehending a language flexible enough to support these abstractions. The hope is that the cost of abstraction is an investment: the time you put in will be rewarded in faster development when you put the abstractions to use. But at some point increased abstraction is going to give diminishing returns.
Now, most programs written today don't involve much more abstraction than would have been possible in ALGOL; that is, we haven't seen a huge, widespread increase in the power of abstraction used by most programmers in about 50 years. People like Alan Kay and Bret Victor[2] decry this stagnation, and maybe they're right to. But maybe the current low level of abstraction is so durable because it's a sweet spot between the benefits you get from abstraction and the costs involved in coming to grips with it.
Most people, particularly most people who develop programming languages, assume that we're nowhere near the point of diminishing returns for increasing abstraction. Go seems like an experiment to test the possibility that the maximum efficiency occurs at a much lower level of abstraction than we usually think. It will be interesting to see whether (or in what domains) that hypothesis turns out to be true.
I think what you're missing is that people do use abstraction if you offer it: look at Java and C# generics, or at what happened when C++ added templates. C++ templates were abused, but the abuse was so common that it's now standard practice.
If you look at code written in the newer, fancier languages, people really do use these features.
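To make the trade-off concrete, here's a minimal sketch (my own illustration, not something from the thread) in Go itself: the near-duplicate per-type functions that Go's original design required, next to the single generic version that type parameters (added in Go 1.18) make possible. The Ordered constraint and the Max functions are hypothetical names, defined inline so the example is self-contained.

```go
package main

import "fmt"

// Low-abstraction style: one near-identical function per concrete type,
// which was the only option before Go added type parameters.
func MaxInt(a, b int) int {
	if a > b {
		return a
	}
	return b
}

func MaxFloat64(a, b float64) float64 {
	if a > b {
		return a
	}
	return b
}

// Higher-abstraction style (Go 1.18+): written once for any ordered type.
// Ordered is a hand-rolled constraint so the example is self-contained.
type Ordered interface {
	~int | ~int64 | ~float64 | ~string
}

func Max[T Ordered](a, b T) T {
	if a > b {
		return a
	}
	return b
}

func main() {
	fmt.Println(MaxInt(2, 3))      // 3
	fmt.Println(Max(2.5, 1.5))     // 2.5
	fmt.Println(Max("abc", "abd")) // abd
}
```

That Go itself eventually shipped generics, after years of users writing the duplicated version by hand, is arguably evidence for the point above: offer the abstraction and people will use it.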
[1] http://www.paulgraham.com/avg.html
[2] http://worrydream.com/dbx/