Python is a high-level language while Go is a low-level, systems language (I think it’s fair to say it’s C 2.0). 95% of the asciinema codebase is high level code...
I get this sentiment. There's an attitude among golang developers that things which are "trivial" to implement don't belong in the go standard library (even if they would see high usage). That, and Go's animosity toward generic code and syntax sugar make me feel like I'm fiddling with nuts and bolts sometimes.
For example, Go stubbornly excludes a math.Round() function from its math package because "the bar for being useful needs to be pretty high" (this comment was followed by several buggy implementations of the method[1]). Go also excludes integer math functions, and the lack of generics means I have to cast to float64 or write my own implementation everywhere.
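For the record, a correct round-half-away-from-zero fits in a few lines. This is just a sketch (not what Go's standard library does), but it dodges the classic floor(x + 0.5) bug that naive implementations hit:

```go
package main

import (
	"fmt"
	"math"
)

// Round rounds half away from zero, like C's round(3). The naive
// math.Floor(x + 0.5) is subtly wrong: adding 0.5 to
// 0.49999999999999994 rounds up to 1.0 in float64, so floor gives 1
// instead of 0. Truncating first avoids that.
func Round(x float64) float64 {
	t := math.Trunc(x)
	if math.Abs(x-t) >= 0.5 {
		return t + math.Copysign(1, x)
	}
	return t
}

func main() {
	fmt.Println(Round(2.5), Round(-2.5), Round(0.49999999999999994))
	// 3 -3 0
}
```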
Go’s lack of versioned packages and central repository makes packaging cumbersome.
One of the big headscratchers of the golang ecosystem. It's resulted in a million different package managers and ridiculous tools that rewrite all of your imports for you.
My favorite method of managing requirements now is glide[2] which pins all of your dependencies (to a git commit or tag) in a glide.lock file. Glide fetches dependencies into the vendor directory, but you never need to commit your vendored dependencies. Instead, commit the glide.lock file and glide can fetch everything for you.
I'm a Python programmer by day, and I would very much appreciate a stricter type system so it was apparent what arguments a function might accept, what values a function might return, etc. without having to open up my browser and dig through documentation. I also love that Go doesn't have much in the way of syntactic sugar--it's absolutely obvious what everything does. For example, there's no way to overload `==` so it returns an object (looking at you, SQLAlchemy).
The lack of generics is the only real pain point I feel with Go, but it's not for trivial integer functions--rather for larger data structures, like various kinds of trees. Of course, it's silly to think Python "does it better"; Python has no static typing at all, which you can emulate in Go via `interface{}`, and there are data structure libraries which do exactly this, but you pay a performance cost (probably still faster than Python).
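To make the `interface{}` cost concrete, here's a minimal sketch of such a container -- it works for any element type, but every read needs a runtime type assertion:

```go
package main

import "fmt"

// Stack stores values as interface{}; callers must assert the
// concrete type on the way out, trading static safety (and some
// speed, due to boxing) for reuse.
type Stack struct{ items []interface{} }

func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

// Pop panics on an empty stack.
func (s *Stack) Pop() interface{} {
	n := len(s.items)
	v := s.items[n-1]
	s.items = s.items[:n-1]
	return v
}

func main() {
	var s Stack
	s.Push(42)
	n := s.Pop().(int) // type assertion: panics if the type is wrong
	fmt.Println(n + 1) // 43
}
```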
If you're a Python programmer, you're used to high-level, succinct code. If you want a typed language of this kind, try Haskell, or Kotlin, or maybe Rust, or Nim, or even (gasp) Java 8.
Compared to Python, Go severely lacks expressive power.
I think that's always been my problem with Go, it feels like it makes a set of trade-offs that really benefit nobody. It isn't low level enough to really unseat C (Rust stands a much better chance there), but it isn't high level enough to replace something like Haskell, Java, C#, or Python. Its type system doesn't seem too terrible until you run into the strange hole left by lacking generics, which considering that basically EVERYONE these days supports generics or a similar mechanism is a very strange stance to take.
TL;DR: Go is a language designed for nobody, no matter what niche you try to fit it into it's missing some feature that's desirable in that space.
> it feels like it makes a set of trade-offs that really benefit nobody
You. I'm extremely happy with its trade-offs, with the only thing I really miss being generics. Most of the other things people complain about in Go I find refreshing, like standardizing on vendoring, static compilation, no "expressive" bs to stroke a programmer's ego while producing potentially harder-to-read/debug code, and so on.
Go is a general purpose language for rapid application development. It excels at concurrency, and it prioritizes consistency, simplicity, and predictability. Most everything is super straightforward, easy-to-use, easy-to-understand, etc. For example, most Go programs build with `go build` and no configuration files, deployment is just putting a single binary on the target system (no runtime dependencies, not even libc). In particular, Java, Python, and C# have complex build systems and runtimes, and Haskell is not in the same family of languages, so its learning curve is steeper for a lot of people--patterns and paradigms are fundamentally different, so it will take longer to see a return on investment. This doesn't make Go a "better language" in any objective sense, but those who value a no-nonsense, pragmatic language will probably enjoy Go.
>one persons 'lack of expressive power' is another persons 'this code is easy to read'.
Expressive translates to "easier to read".
Having 100 lines of very simple procedural statements to accomplish what an expressive language does in 5 higher level lines doesn't make the 100 lines easier to reason about or maintain.
At worst it makes reasoning about their memory/speed behavior harder -- but when it comes to reasoning about their functionality and the correctness of your program it's orders of magnitude easier.
there's definitely some truth to this -- more concise (via expressive power) code can be easier to grok, because it's easier to see the forest w/o so many trees in the way.
but on the other hand, having simple code where it's easy to point at each bit and say 'that does this, that goes here' can also make code easier to understand.
for me personally, on balance, the latter wins outweigh the former losses when going from python -> go. i'd say it's probably a small bump down in expressiveness for a large jump up in 'look and know what it does'-ness.
(it's not like go is really missing that much that python has -- maybe list comprehensions (which are great!), tuples, and generators?)
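fwiw, the go spelling of a python comprehension like `[x*x for x in xs if x%2 == 0]` is just an explicit loop -- a small sketch:

```go
package main

import "fmt"

func main() {
	xs := []int{1, 2, 3, 4, 5}
	// Go equivalent of the Python comprehension
	// [x*x for x in xs if x%2 == 0]
	var squares []int
	for _, x := range xs {
		if x%2 == 0 {
			squares = append(squares, x*x)
		}
	}
	fmt.Println(squares) // [4 16]
}
```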
it'd be fun to do some empirical investigation of these things. take some toy problem, code up idiomatic, by-the-language-advocates solutions, and then throw them at large groups of people asking them to make some change, or fix some deliberately introduced bug, and see how the #s came out. :)
I agree. If I am doing something "complicated", that won't be obvious to a beginner, I put a comment beside it. Having two or three lines as opposed to 20 or 30 is often easier to read if you ask me.
Python can be very readable. In practice, it's much less readable than Go. Also, Python's lack of typing makes it very hard to follow the types of arguments (something I lament daily). I love that I can enter a few keystrokes in vim and see the type signature, methods, fields, documentation, or even the full definition. Python doesn't really have this (at least not unless your project is using mypy).
One of the side projects I've been working on attacks this in a different way; what if we efficiently trace every single call point at the Python and C level, including all arguments, such that we can build up a historic inference of types a given object label had at any point in time.
The problem with under-expressive code is boilerplate.
"You cannot pay people enough to carefully craft boilerplate code" (quoting Yaron Minsky of Jane Street). When you glaze over while reading boilerplate, or wearily copy-paste when you write boilerplate, the chance to make an error grows significantly.
This used to be a problem with Java (and still partly is), despite static typing and top-notch IDEs.
It's sad when a new language suffers from that, too.
OTOH, it took Java eight years to introduce generics. Go just needs to mature a bit (not ironically; it needs to sort out less ambitious technical issues first).
I don't necessarily want "high-level, succinct code", I want something simple that compiles to static binaries with minimal configuration. Rust is most likely what I'll use in the future, but it's fairly low-level compared to Go, and the editor support pales. Eventually I think it will get there though.
Python can be friendly, but in practice people frequently use Python's magic features for everything. Go reigns people in, which can be a burden for some, but I appreciate the predictability.
edit: Also, everyone, and I mean ev-rey-one that writes more than 12 lines of Python code a day needs to be using PyCharm. Navigation, inference, type safe refactoring, pure gold.
> I'm a Python programmer by day, and I would very much appreciate a stricter type system so it was apparent what arguments a function might accept, what values a function might return, etc. without having to open up my browser and dig through documentation.
While Perl5 is not a typed language, it does have addon type checking modules such as Type::Tiny that play really nice with addon (ahem) object systems like Moo. It doesn't look pretty but it works. Perl6 fixes that and also makes it pretty as you can declare variables as typed and the interpreter will happily complain when you do something like:
my Int $a = '3';
Type check failed in assignment to $a; expected Int but got
Str ("3") in block <unit> at <unknown file> line 1
Add-on type systems have the disadvantage of being add-ons: the vast majority of code will not have any type information associated with it, so reasoning about what code accepts or returns is just as hard as it is without the added type system.
I was thinking of using Typed Clojure, but the advantages would unfortunately not translate to the libraries that I use.
You annotate your functions with the types they accept and return. Run this tool and it shows inconsistencies. The more you annotate and the more precise the types, the more helpful it is. These are not Hindley–Milner types (this is actually called Success Typing), but it goes a long way toward doing what you suggest, i.e. you can have your cake and even eat some of it ;-)
Python now has mypy, which I understand is rather similar. Guido at least is rather fond of it. But it is still new and I haven't yet heard of it being used in the real world.
Not related to the language switch, but I just want to say that I love asciinema. Thanks to it I do small screencasts where normally I would be totally discouraged by the amount of work needed to really record a portion of the screen and upload a video recording. For people who mostly want to show terminal-based things (i.e. programmers), it is a zero-friction way to show an interactive session. Thanks for building it.
I actually discovered it from your post the other day, antirez. The CLI thingy. And I concur. It is a truly wonderful tool I will be using extensively: Docker inits, first N steps in spinning up an instance, setup devenv and build first "hello world," and much more ;)
I'm not so fond of the fact that recordings are served through asciinema.org. There are alternatives like ttyrec + tty-player that work just as well and don't depend on external servers.
I don't understand this complaint about Go. Either a function can result in an error or it can't. If it can result in an error and you don't handle that error, your program's behavior is undefined. Undefined behavior is a bad thing.™
Go forces you to handle the error (or explicitly ignore it). That design choice results in remarkably stable programs.
I don't mind it, but I don't love it either. It's easy to write, but it would be nice if the compiler enforced error handling so that you did not need to run errcheck.
I completely fail at writing rust, but their implementation of error handling is much nicer.
Want to call a function, returning the error up to the caller on error? Instead of
If you need the function to succeed, similar to how go has some functions like MustCompile, rust has unwrap and expect:
let res = somefunc().unwrap();
or
let res = somefunc().expect("somefunc broke :(");
> Go forces you to handle the error (or explicitly ignore it).
But that isn't really forcing you to do anything. There are 3 ways you can deal with an error: handle, ignore, or panic. In many cases go programs ignore errors when they really should panic.
It is disappointing that the go "equivalent" that you often see in examples is
res, _ := somefunc()
This is not handling an error or explicitly ignoring it, it is pretending that the function never fails.
rust makes it impossible to ignore the error, while also making it easy to skip error handling if you just want to abort on failure.
if I could write code in go + rust generics + macros and not have to deal with the lifetime borrow checking crap, I would be so happy :-)
> if I could write code in go + rust generics + macros and not have to deal with the lifetime borrow checking crap, I would be so happy :-)
You might like OCaml. It's easy to write like Go but with a much more useful type system. Downside: concurrency is not as pleasant as in Go… at all. Then again, I see a lot of things written in Go that aren't particularly concurrent and just happen to use Go because the author wanted a moderately high-level compiled language with automatic memory management. OCaml is excellent for those.
In the same family, but with fairly nice concurrency (as far as I've gone with it), is F#. It even works on Linux, through Mono now, and hopefully soon with .NET Core.
The explicit error check after every call actually doesn't provide anything useful. If any of foo(), bar(), or baz() fails, magic() is screwed, so why bother writing the same code three times? This shows that the proper place for the error handling isn't in magic(), it's in whoever is calling magic(), and yet here I am writing the same do-nothing check again and again in this function, and in all honesty, in every function.
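Concretely, with foo/bar/baz stubbed out for illustration, magic() ends up looking like this:

```go
package main

import (
	"errors"
	"fmt"
)

func foo() error { return nil }
func bar() error { return errors.New("bar failed") }
func baz() error { return nil }

// magic repeats the same check after every call, even though it has
// nothing useful to do with a failure except pass it upward.
func magic() error {
	if err := foo(); err != nil {
		return err
	}
	if err := bar(); err != nil {
		return err
	}
	if err := baz(); err != nil {
		return err
	}
	return nil
}

func main() {
	fmt.Println(magic()) // bar failed
}
```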
It's the multivalue return semantics that's the problem here. It's impossible to chain multivalue functions together to write anything succinctly. Worse yet, it's easy to simply ignore the error and create latent bugs, because the first return value (which you're probably not ignoring) now has a "zero value" which can be semantically valid. Whereas if I simply didn't catch an exception, it would be caught, perhaps by main(), but it would be caught, and I wouldn't be populating my data structures with gibberish, which I can very easily do in Go.
"It's impossible to chain multivalue functions together to write anything succinctly"
Lua's standard error return idiom is: nil/false, error-string, error-code. The Lua standard library also includes a function, assert, which will throw an error if the first argument is nil/false. The value thrown is the second return value, which in the idiomatic case is the error string. If the first argument is not nil/false, then assert returns the entire list of values.
Thus _you_ can choose the best approach based on _your_ needs. The following are _both_ examples of idiomatic Lua.
Example 1:
local v, errdesc, errcode = foo()
if not v then
    if errcode == 1 then
        ...
    elseif errcode == 2 then
        ...
    else
        error(errdesc)
    end
end
Example 2:
local v = assert(foo())
Example 2 (easy composition):
local v = assert(foo(assert(bar())))
You're not limited to that. Neither assert nor error are specialized built-ins. You could throw a structured object with, optionally, a __tostring metamethod for string coercion; or you could write your own routine that manipulated the values differently. But in idiomatic Lua it's not common for thrown errors to be caught and differentiated, and code usually expects the thrown value to be a string or coercible to a string. If it's thrown it's usually because there's nothing particular that can be done, and so usually it's caught at the boundary of some logical transaction. But Lua allows you put that boundary anywhere, and you're not limited to that idiom. In some cases you want more structured exceptions, and you can have them. But as a module author the expectation is that you preserve the application's freedom of choice, so modules usually return the idiomatic tuple whether or not it's recoverable.
There are some errors that Lua will always throw, like memory allocation failures. These can be caught like any error, and the Lua VM is never put in an inconsistent state. But they're an example of the kind of non-specific error where you normally only catch them, if at all, at some logical transactional boundary--an image transformation in a photo editor, or a client request for a web server. Lua is, notably, one of the few scripting languages where allocation failure is recoverable. And rather trivially recoverable, in fact.
EDIT: error() is a specialized built-in: it's how you "throw" in Lua. But it will throw any value, not just a string.
One of the reasons why Go is so unwieldy is because it is designed to be simple. It's very easy to mistake simplicity for elegance; the reality is that simplicity can lead to elegance, but it doesn't have to.
Go is an interesting language when you have a project with dozens of developers, because it explicitly, by design, does not let you introduce any concept of "your needs". Go is not ashamed of the fact that it has one way of doing things, and that way is rarely elegant. But it _is_ effective.
I agree with that assessment. It's impossible to disagree because the designers have been very explicit about their motivations.
My argument was only that multivalue error returns are not necessarily so limiting. Rather, IME they're a great compromise that can permit the best of both worlds. I always wondered why Lua's idiom was never picked up by Go. So much of Go seems lifted wholesale from Lua, or at least a shared ancestor. Particularly Go's goroutines, lexical closures, and how cleanly they interoperate; both of those constructs are much more limited in languages other than Go or Lua because of shortcuts taken to simplify the [pre-existing, broken] implementations (e.g. JavaScript and Python). But especially Go's exception mechanism, with syntax and semantics likewise almost identical to Lua's.
With some effort, it can be made less horrible. I have a utility function that lets me write it like this, whenever I can get away with doing steps after an error has occurred:
Yeah, that kinda works, except you have to make sure that you add your error check to the end every time you add something. Also, you're still stuck with doing the check if your code doesn't quite fit this pattern.
It's even easier to not use exceptions either and let your supervision tree, process monitor, hardware heartbeat switch and/or night operator handle it.
This is the right answer, really. By and large, at least, that's how sane operating systems deal with failed components (processes). They crash, but then they can be restarted if monitored by a supervisor (systemd / init). At a higher level most places handle it at the machine / node level as well -- auto-failover, auto-provisioning, health monitoring, centralized logging. All that is the right answer to building concurrent systems, except most current languages, save a few (Erlang / Elixir), are stuck in the equivalent of an early-90's Microsoft Windows 3.1 environment.
Exactly -- I get annoyed by having to write 20 `if err != nil {`s when just one `catch` in another language would work. IMO, this is one instance of Go's maintainers conflating language simplicity with compiler simplicity.
Perhaps the sentiment is valid, but practically speaking, in Rust you do even more of the compiler's work for it, no? I suppose one could just cherry-pick an analogue of the `try!` macro into Go.
I would love it if there were a keyword to check if an error return value received was non-nil, and then return it and default values for other return types if so.
Same. I've some spots in my code where I return three or four values (there's no logical reason to put them inside a struct...) and so I end up with a handful of
if err != nil {
return "", 0, err
}
_all_ _over_ _the_ _place_. What I also don't like (perhaps I'm just being too uptight) is
return myStructType{}, nil
The alternative is to use a named return parameter (which usually is good form), but it's still kinda gross, especially if it's a longer package + struct name.
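For illustration (parse and its results are invented), the named-return version the parent describes looks like:

```go
package main

import (
	"errors"
	"fmt"
)

// With named results, a bare `return` after setting err avoids
// spelling out zero values ("", 0, myStructType{}, ...) for every
// result -- at the cost of slightly more opaque return statements.
func parse(ok bool) (s string, n int, err error) {
	if !ok {
		err = errors.New("parse failed")
		return // returns "", 0, err
	}
	return "hello", 7, nil
}

func main() {
	_, _, err := parse(false)
	fmt.Println(err) // parse failed
}
```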
Nobody is suggesting to ignore errors. Checking every single return value in a block of code is a ton of boilerplate versus catching all the exceptions from that block in a single place.
Yeah but you aren't really capturing the difference fully.
If all of the exception handling code is at the end of the block, you lose context (which of the three file opens threw this FileNotFoundException), scope (every variable you want to use during cleanup has to be declared outside the try, initialized to a sentinel, and then checked in the catch to be sure the try initialized it to the real value), locality (the code, the error handling, and possibly the finally block all deal with the same things, and should be in sync, but are spread out), and more...
Error codes are verbose, but exceptions have issues too.
Neither one is ideal for me but error codes are the lesser of two evils for the time being.
Sometimes you don't care about the context. You can have middle-ware code: [top level code]<->[your code]<->[library]. Maybe you don't care if library throws a "disk full" exception or "network down", littering that part with "if err != nil ... " might not be the best solution. Top level code might decide to log the error, retry, retry and log, send a page, or even just let another layer on top of it decide.
But as you say, it is a trade-off. At least Rust provides a neat try! macro; it goes a long way in handling boilerplate.
let mut f = try!(File::open(filepath));
let a_byte = try!(f.read_u8());
Perhaps Go would provide a source code transformation tool, to automatically expand code to do the equivalent.
Depends on the language for me. If I'm in Java and need to put try-finally blocks for cleanup of non-memory resources, then error codes are cleaner. If I'm in C++, with all objects using RAII to clean up after themselves, then exceptions are the simple way to go.
Here's a paragraph from Rust's book on error handling [1] (written by BurntSushi, who writes quite a lot of Go too):
> You can think of error handling as using case analysis to determine whether a computation was successful or not. As you will see, the key to ergonomic error handling is reducing the amount of explicit case analysis the programmer has to do while keeping code composable.
Yes, handling errors is important and should be done, however it need not result in a block of text that people just skim over and that is easy to get wrong.
Is there a more elegant way to handle errors in a concurrent language than returning error values? In general, exceptions do not work well within async programming. Too many JavaScript developers use exceptions and I advise to return errors instead. At least Go is consistent.
Exceptions don't even work fine in Python for single-threaded apps. Everything throws, and almost nothing catches. User provided malformed JSON? Hit 'em with an HTTP 500.
Severe undefined behaviour by default doesn't appear to be a good language design choice.
Python is safe by default because you don't need to actively look for errors: when an error happens and there is no exception handling code you get a traceback, and the only way to ignore an exception is to explicitly write an (in)appropriate exception handling clause.
> Either a function can result in an error or it can't.
The problem is handling the error every time the function is called, in the same place it is called, which adds extra visual noise to the business logic.
Other ways to handle errors:
* Throw an exception. Program for the default happy path and throw an exception if something ... exceptional happened. This way code is not littered with 50% error handling, which makes it hard to understand what it should be doing "normally". There is a trade-off there. But this has been implemented badly in other languages like C++ and Java, and Go's writers probably looked at that, thought "no way", and threw the idea away. I think Python does this right and it greatly simplifies code.
* Crash the goroutine (panic?). It is a bit like exceptions, but it applies to systems with concurrency units (goroutines, threads). However, in general that might not work as well, for two reasons. One is shared memory: you can't just restart a crashed goroutine, or assume it hasn't scribbled over heap memory or left data in an inconsistent state somehow. The other is that goroutines don't have identifiers, so you can't say "I want to know if goroutine x crashes", or "if x crashes, make sure y crashes as well", or "restart goroutine x and try again". But that is not Go, that is Erlang / Elixir. Once you have seen process monitoring, linking, supervision trees and how they result in shorter, clearer code and less operational pain and suffering, it is hard to go back.
(Also, if this sounds alien and crazy, think about OS processes. A program has a pid. You can kill it using the pid, or restart it. You know it hasn't scribbled over memory and messed with other programs. This is what sane operating systems do, and it has been this way for many decades, from early Unix through NT on Windows. Other languages basically behave like Windows 3.1, which is cool, but 1995 cool, not 2016 cool.)
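Go can imitate a tiny slice of this by hand with panic/recover. A rough sketch (run and its error channel are invented for illustration), converting a goroutine panic into a value somebody can act on:

```go
package main

import (
	"errors"
	"fmt"
)

// run executes fn in a goroutine and converts a panic into an error
// on a channel -- a crude, by-hand version of what a supervisor gives
// you for free in Erlang/Elixir (no restarts, no linking, no
// protection against corrupted shared state).
func run(fn func()) <-chan error {
	errc := make(chan error, 1)
	go func() {
		defer func() {
			if r := recover(); r != nil {
				errc <- fmt.Errorf("goroutine crashed: %v", r)
				return
			}
			errc <- nil
		}()
		fn()
	}()
	return errc
}

func main() {
	err := <-run(func() { panic(errors.New("boom")) })
	fmt.Println(err) // goroutine crashed: boom
}
```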
I should probably mention Rust as well. Rust does a good job of proving at compile time that there won't be data races between threads, so restarting a thread there is not as dangerous, though from what I can tell it is still kind of awkward in the code. And even for the explicit error handling case, it has the try! macro, which goes a long way toward visually simplifying the code. It works because Rust decided that functions can return an agreed-upon Result type, which is either an ok value or an error. If the callee returns ok, try! passes the value along to the calling code; if it gets an error, it returns early with the error. (This assumes both the caller and callee return Result. I don't know much about Rust, so this might all be wrong; if anyone knows better please correct this ^.)
I haven't written any Go, but I have written Lua, which has a similar error handling convention.
I don't understand the claim that "Go forces you to handle the error (or explicitly ignore it)". I found [a sample program](https://gobyexample.com/writing-files) and removed all the error checking code. Nothing happens that wouldn't happen in Python: it compiles without warnings (even C can warn you about ignoring returned values).
I can see how Rust forces people to either handle the error, or explicitly ignore it. When you forget about it, it just doesn't compile.
>Not any more than you can forget about the returned error code in Go (val, _ := foo() etc).
I'm pretty sure that's an underscore right there, that very explicitly means "I'm consciously ignoring this error". Not sure what you're talking about with Optionals and Conditionals. Errors in Go are in the end structs that satisfy the error interface. That doesn't have to be an errors.Error, you can implement your own to satisfy more complex needs.
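A minimal sketch of such a custom error type (HTTPError and fetch are invented for illustration):

```go
package main

import "fmt"

// HTTPError satisfies the error interface while carrying extra
// fields -- a custom error type beyond what errors.New gives you.
type HTTPError struct {
	Code int
	Msg  string
}

func (e *HTTPError) Error() string {
	return fmt.Sprintf("%d: %s", e.Code, e.Msg)
}

func fetch() error { return &HTTPError{Code: 503, Msg: "try later"} }

func main() {
	err := fetch()
	// A type assertion recovers the structured fields from the
	// plain error interface value.
	if he, ok := err.(*HTTPError); ok {
		fmt.Println(he.Code, he.Msg) // 503 try later
	}
}
```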
Let's say you're doing something with the result of http.Get, because otherwise it makes no sense. In Go, you'd have something like:
resp, err := http.Get(someURL)
if err != nil {
    return nil, err
}
res, err := DoStuff(resp)
if err != nil {
    return nil, err
}
return res, nil
In Haskelly-Go, you'd do:
do
    resp <- http.Get(someURL)
    DoStuff(resp)
The types of both http.Get and DoStuff would be monadic, and the result of the do block would also be monadic. When you eventually get to the place where the error is significant, you handle it there when "unwrapping" your value from the monad.
Yeah, but if you had 5 operations in Haskell, you'd write:
do
    a <- operation1
    operation2 a -- This one returns nothing
    b <- operation3 a
    c <- operation4 b
    operation5 c
In Go, you'll keep repeating that 'if err != nil {' everywhere. And things get way more interesting with more complex monads, but error handling is simple.
Do-notation is sequenced evaluation, where the result of each statement is passed as the argument to the next (a simplification, but good enough). In this case there are two sequential operations and if either of them fails then it automatically propagates and results in the whole sequence returning a value that indicates failure. In Python, an (approximate) analog would be like this:
from math import sqrt

def f(a):
    if a >= 0:
        return (True, sqrt(a))
    else:
        return (False, None)

def do(a):
    (ok, y) = f(a)
    if ok:
        (ok2, y2) = f(y)
        if ok2:
            return (True, y2)
        else:
            return (False, None)
    else:
        return (False, None)

# example
for x in [2, -2]:
    (ok, y) = do(x)
    if ok:
        print("Result: {}".format(y))
    else:
        print("Calculation failed.")

# Output
>>> Result: 1.189207115002721
>>> Calculation failed.
This sort of thing is super awkward in Python, but is a common/natural pattern in Haskell (note: I realize this is not how you'd do this in Python, but I'm trying to make it equivalent to the Haskell). This does the exact same thing as the Python program above and is a more complete version of what the OP posted:
import Control.Monad (mapM_)

{-
Maybe Double represents a computation that either
returns a Double or fails. In this case, fail
when passed a negative number. Note: this is not
the same as throwing an exception.
-}
f :: Double -> Maybe Double
f x
    | x >= 0    = Just (sqrt x)
    | otherwise = Nothing

-- Fleshed out version of OP's snippet
eval :: Double -> Maybe Double
eval x = do
    y <- f x
    f y

{-
do notation desugars to monadic binding; in this
case eval can be desugared to the equivalent definition:

    eval x = f x >>= f
-}

-- Here's why this is convenient, because you can pattern match
printIf :: Show a => Maybe a -> IO ()
printIf (Just x) = putStrLn $ "Result: " ++ show x
printIf Nothing  = putStrLn "Calculation failed."

-- This is basically the equivalent of the Python for loop
main :: IO ()
main = mapM_ (printIf . eval) [2, -2]
I know Haskell syntax is strange looking if you're unfamiliar with it, but it's mostly geared towards making certain patterns (like this one) convenient.
The operation is called "map". In Haskell it is the same as the "fmap" function, for historical reasons.
Those languages have patterns that make Result easy to handle, without introducing exception handling. For instance, Rust has "try!", and is introducing the new "?" syntax, to turn an error Result into an early return, and make code with robust error handling feel like straight-line code.
> Python is high level language while Go is low, system level language (I think it’s fair to say it’s C 2.0).
I thought we had killed this notion already, but it still seems to be lurking around. Go doesn't support volatile, and doesn't let you specify what goes on the stack vs. the heap (or pin anything, for that matter).
Great for web services? Sure. Low level C replacement? Nope, for my money that's Rust.
C doesn't allow you to specify what goes on the stack or heap, either. The "stack" in C is called automatic memory. What differentiates automatic memory is that it's automatically released when it falls out of scope. But it can very well be allocated using the same machinery as malloc. Yes, it behaves in some sense like a logical stack, but it needn't reside in some sequentially allocated storage, and the order of allocation or deallocation of objects in the same scope is completely undefined. Indeed, clang supports a mode with two "stacks", so that buffer overflows of automatic array variables can't easily overwrite function invocation bookkeeping.
Go is a fully GCd language with lexical closures. All memory is automatically released when it's no longer _referenced_. So the distinction is irrelevant.
As for volatile, it doesn't really do what most people think it does. volatile prevents _logical_ loads and stores from being moved. What that means for when you're dipping into assembly can vary from compiler to compiler. For typical code the volatile qualifier is only relevant when using setjmp/longjmp, for signal handlers, and in the C11 standard, for some kinds of threading.
In design and spirit I think Go is very much the successor to C. Though, I've never written a line of Go in my life and code primarily in C most days, so I'm only half informed. But if you look at the history of both C and Unix, they were never about performance, per se. What distinguishes them is ease of implementation, ease of portability, and a philosophy of achieving _easy_ performance gains by shifting a subset of hard problems onto the caller (the so-called "Worse is Better" theory). Thus, C was designed so that it could be compiled in a single pass (without an AST). Similarly, one of the reasons the Go compiler is so fast is because of very intentional language design decisions. Those approaches result in all manner of unintended consequences in addition to the intended consequences, and it's important not to conflate the two.
I don't think you can conflate intent with real world application. Certainly they didn't intend for Javascript to take over the server-side world when they built the language.
I don't think you can compare a GC memory model with heap/stack. There's a sliding scale of tradeoffs from a fixed stack -> heap -> GC. There's also a set of hardware out there that doesn't have a unified memory model. There are millions of these types of devices out there in your set-top box, game console, and many other places. If you don't have direct control over memory you'll never be able to work with them and will always need a C/C++/Rust wrapper around the low level bits.
Lastly, I'd never want to use volatile for threading[1]. It has no memory barrier semantics and no instruction ordering semantics (aside from ordering with other volatile reads). If you need thread synchronization you should use the platform-specific atomics, otherwise you're in for a world of pain (and god help you if you're going from x86-win32 -> ARM; there's an implicit memory barrier in x86-win32 that most people don't know about).
[1] Note that volatile variables are not suitable for communication between threads; they do not offer atomicity, synchronization, or memory ordering. A read from a volatile variable that is modified by another thread without synchronization or concurrent modification from two unsynchronized threads is undefined behavior due to a data race. - http://en.cppreference.com/w/c/language/volatile
volatile doesn't offer atomicity, but sig_atomic_t does. You can use sig_atomic_t between threads, but in most cases you need the volatile qualifier as a kind of memory barrier. The guarantees that "volatile sig_atomic_t" provide are very weak and not something you'd normally depend on (especially when you have access to POSIX, C11, or other native atomics), but can be useful on occasion.
I've used it a few times when a library needs to initialize some [very simple] global state, but where I didn't want to require the application to always link in libpthread. Even on Linux that can be an issue for dlopen'd modules. glibc has bugs when late binding libpthread, and some BSDs don't even pretend it works (it just fails loading outright).
> In design and spirit I think Go is very much the successor to C.
It was supposed to be, but soon after release they "pivoted" (is that the correct usage of the term?) by rephrasing a bit what "systems" means in "systems languages". I don't quite believe that Rob Pike, of all people, didn't know what "systems languages" meant, but anyway: it was meant to replace C and C++, I think, but ended up attracting more of a Python, Ruby, and some Java crowd. A lot of Java people I know who ended up using it had mixed feelings, mostly due to the lack of generics (sorry, that is a belabored point, and mentioning it will probably lead to banning from Go language forums at this point). A lot of Python people I know enjoy Go, mostly due to better perceived performance and also ease of deployment.
Go is far lower level than Python, which makes the term "low level" fine. It's like calling some color swatch "black" and then arguing "that's not black _enough_ to be black". Any definition we give for an absolute term like that will be fundamentally arbitrary, but we know right from wrong when we see it.
A language does not have to replace C to be "low level". You sound like the type of person who, thirty years ago, would argue that C isn't low-level because it isn't assembly.
Notice I'm being careful to mention semantics, not performance (although those semantics largely help with the latter). The ASM<->C bridge is much smaller, since you're talking about registers and possibly custom instructions that are usually exposed via intrinsics.
For me the dividing line is whether your language abstracts away memory management. If I don't get control over where memory lives and how I access it, it's not low level.
A PC is this nice homogeneous memory structure, but there are many systems that have specific semantics around different address spaces, and you may need to be able to access and manipulate them in specific ways.
Yeah. As a side note, I remember what Steve Klabnik once said: "Basically, Go is like a lower level Ruby or Python and Rust is like a higher level C or C++".
> I thought we killed this notion already but it still seems to be lurking around.
I don't have experience with Go beyond reading some snippets, but the Asciinema developers clearly developed with it, and they seem to think so. It's not some notion going around but something an (at least moderately) experienced developer says.
I'm not saying they must be right, they might simply be more experienced in Python and prefer Python because of that, but I don't think it should be dismissed as "some notion that's still going around".
Until you can load up Go code on a microcontroller, or build an OS kernel in it, it is not going to replace C in many circles. Go is also garbage collected, which rules it out for being low by some definitions of "low level".
Low level is a very ambiguous term; for some it simply means anything to do with the CLI. For others, it means kernels, drivers, etc.
Yeah, not knocking their choice in Python. I think they made the right evaluation there.
It's more that Go keeps getting held up as a replacement for C without understanding what makes C/C++/Rust viable in their domains. It all comes down to the memory model, and if you don't have control over that then it isn't a replacement.
Take volatile for instance, there's certain pieces of hardware that you'll never be able to use Go for without wrapping some parts of the hardware in C because there's fundamental memory IO semantics that volatile provides but Go doesn't support.
Very many developers today do not fully understand what low level means, so very many things get called 'fast' or 'low level' or 'C-Like' that really are not.
As a user, I'm afraid this means I'm unlikely to use Asciinema anymore. They've left me with no acceptable installation option. Go get was clean and installed a nice, statically-linked binary. The python ecosystem is messy. Not even considering the disaster that is version 2 vs 3, an installed python program is just harder to manage as files are strewn across many directories that are difficult to understand without understanding python development. This may be fine if you're writing an application and you spend enough time with it to understand the structure and purpose of the installed files, but as a casual user, it sucks. Brew isn't much better as I've continually run into problems upgrading packages. I love the idea of nix, but gave up on it after my first few installed programs all didn't work for anything involving SSL.
I know it wasn't intentional, but this change feels like a huge FU to users. If I'm forced to use Asciinema again, I'm going to have to resort to using it from within a Docker container where the mess is, at least, sandboxed, but that option has its own drawbacks.
99% of asciinema users don't care about Go, Python, static binaries and other details like this. They're happy they can 'brew install asciinema' or 'sudo yum install asciinema' and just use the tool. You're being too religious about it.
Yep. This. I don't write Python and don't know the ecosystem, so when something doesn't work out of the box I'm screwed. With Go, on the other hand, I grab a statically linked binary and I'm set.
I use the Deluge BitTorrent client, which is written in Python and GTK+3, and I've never had a problem (on Windows). It's perfectly possible to deliver self-contained applications with Python, or anything else.
I don't know much about how this works on *nix, but on Windows it's not super difficult to bundle a Python application with py2exe and distribute that.
I'm left wondering why the switch was made in the first place. I spend about half my time coding in Go and love the language, but if I wasn't looking for easier concurrency or a speed boost, I wouldn't re-write an existing code base in Go. It's a lot of work to do just to revert a short time later.
The majority of comments are fair, although I'd disagree with it being C 2.0 and with err != nil getting old - I much prefer it to exceptions.
I know of two cases where companies picked Go because it was in the headlines on HN ("we switched from X to Go" stories were pretty common a few years ago) and also "because Google supports it, so it is going to be huge". Now two years later, there is regret and gnashing of teeth. Mostly due to now having to deal with new errors, not everyone knowing Go well enough, and destabilized working code. One company loves the ease of deployment though -- a single binary. Can't argue against it there.
I feel like you can't always handle every error. For instance, if you're using a lot of third-party libs that you aren't entirely familiar with, you may not be familiar with every possible error a given method call can throw. To complicate the issue, error messages are often cryptic, to the point where even if you do catch it, it's an entirely pointless exercise.
In this situation I feel like try: except Exception as e: is infinitely better than a half-dozen if err != nils.
Most of the time I end up writing if err != nil { return nil, err }. This seems better than a simple panic to me, since it allows the function to return, executing any deferred functions and doing any necessary cleanup.
If I had any complaint, it would be that Go could offer some way to set up default handling that simply returns zero and the error for any unhandled errors.
I know the same thing can be done with try/catch/finally, but then you have the same problem of needing to know what errors may be returned so you can make sure to wrap that call in a try statement.
Go was essentially designed with the idea that you shouldn't ignore errors. However, if you really don't care about errors, you can simply ignore them:
That ignores the error but then silently does the wrong thing when one is returned. That's different from exceptions, where you bail out of the function instead of proceeding as if nothing went wrong.
I both prefer error returns to exceptions and find Go's implementation of error returns tedious. There are many other possible implementations that avoid the constant boilerplate. I've seen alternatives that exist in other languages and ideas for a syntax that might make sense in Go elsewhere in this thread, and there are surely others elsewhere.
It's funny, when using C++ or Rust, Go feels like my version of Python/Javascript/etc. There's something very appealing about it to me... it's the NodeJS of the typed world to me, and i love it for that. Granted, i'm trying to switch my codebases to Rust, but still - i can't imagine going back to Node/Python/etc. But to each their own, i'm not foolish enough to say i'm "right".
With that said though, i have a hard time understanding complaints like:
> if err != nil { gets old even faster.
I may be biased because of my time in frontend JS land, but i love checking errors every time. It's a language feature to me.
Ignoring errors and expecting something else to care and catch/handle them is just.. worrying to me. Likewise,
try
.. stuff ..
catch
.. stuff ..
Gets far older to me than `if err != nil {`. But that's just me i suppose.
We did. But you see, theoretical analysis doesn't always produce accurate results, and you won't learn about practical issues until you get your feet wet. Also, like I said in the post, it turned out to be not a great choice for the combination of this particular project AND me.
> Batteries included: argparse, pty, locale, configparser, json, uuid, http. All of these excellent modules are used by asciinema and are part of Python’s standard library. Python stdlib’s quality and stability guarantees are order of magnitude higher than of unversioned Go libs from Github (I believe discrete releases ensure higher quality and more stability).
Go also has most of the listed libraries (like http, arg parsing, json) included in the stdlib. I'd argue that the http library in Go is one of the best out there :)
Can't disagree about dependency management, hopefully it gets addressed sooner rather than later. There was a good discussion with the Go team on the topic at Gophercon today.
Dependency management is what drove me away. I drank the Kool-Aid; had the language specification printed and studied it with care. What a wonderful, concise language. Then I tried to actually use it for something. That something involved a few large dependencies and although I was personally able to cope with the dependency problems, after about six weeks I concluded I was wasting time; there was no way I would ever advocate to an employer or co-worker this platform. Solid, easy dependency management and easily reproducible builds are non-optional, and Go doesn't provide either without a bunch of legwork. The folks behind Go have heard this a thousand times and it has done nothing to persuade them to deal with it. That's a no-"Go" for me.
Re: dependency management: do people not know about Godep, or am I vastly overestimating its usefulness? Because so far, one of my favorite things about the language is that I can freeze things at the commit level, and in the end it's all git, so if I need a specific tag I can just grab it and re-Godep. For me at least, Go's idiosyncrasies are far and above more pleasant to deal with than, e.g., Python dependency hell.
> Re: dependency management: do people not know about Godep, or am I vastly overestimating its usefulness?
How many Go package managers are out there? That's right: that's the problem. Why invest in this one when most of the community doesn't use it, doesn't even tag their releases properly, and frankly just doesn't care about anything not in the core library or not written by the Go maintainers? This is a fucked up situation. I read in this thread that Go is "batteries included" -- well, no, it has a few perks and a lot of issues.
The Go maintainers' attitude of "we are very opinionated about things, but somehow we let the community handle package management because we delivered a half-baked one (go get) that fits our needs and we don't care about yours" is a joke.
I mean, in my opinion a package manager creates as many problems as it solves. Look at pip or npm, both of them are huge, messy, buggy monstrosities that only further complicate packaging in their respective platforms.
Maybe Go would be well served by a well written package manager. Maybe. But I've never seen one that makes the process unequivocally better. go get really is good enough for the simple case, which is, 99% of the time, what people want.
For me, huge complex dependency graphs are a big code smell. It bothers me whenever I see a program that pulls in lots of very large dependencies using a very fragile mechanism (example: pip/setuptools).
My question is, what would "fix" the situation for you? A pip-style package manager? Because go get has all the important features (in my opinion) without all the pain that comes from such a complex tool. I admit that a lack of semantic versioning is a major pain point right now, but that's only because the go community hasn't found a good solution, in my opinion. But I've actually been working on that particular problem and I think I've nearly solved it. It's just a matter of time before someone figures it out.
My experience with finding and using a good, terminal-based progress bar library led me down the road to fixing some termios-related issues[0]. I've yet to find many great Go libraries for abstracting the terminal in a cross-platform way.
That was my initial reaction too, however I think the author was listing _all_ of the libraries used by Asciinema, rather than just the ones they had to find outside of the standard lib. Their point being that Python includes them all, so there's no need to go external.
> Batteries included: argparse, pty, locale, configparser, json, uuid, http. All of these excellent modules are used by asciinema and are part of Python’s standard library. Python stdlib’s quality and stability guarantees are order of magnitude higher than of unversioned Go libs from Github (I believe discrete releases ensure higher quality and more stability).
Worth noting that Go has `flag`, `json`, and `http` in its standard library, and they're all much easier to use than the Python equivalents. In Python, you unmarshal your JSON into a dict or list and then write a function to convert it into the right object while making sure the structure is correct. With Go, the library does the right thing out of the box.
'encoding/json' is definitely more powerful than Python's JSON package. 'http' is similar.
But I will say that 'flag' isn't even _close_ to being comparable to argparse. flag is, like, the bare minimum you would need to write a command line application that accepts flags and generates help text. There are dozens of libraries for Go that more closely resemble argparse because flag's capabilities are so limited.
go get only downloads and installs the code from the default GitHub branch, which is now the Python codebase. Fortunately they keep the Go code in a golang branch.
Although my repo is just a sandbox to study how to explore and learn bleeding edge technologies, unfortunately it also shows how fragile our github centered software ecosystem is becoming.
It seems to me that Go has very useful runtime features (speed, concurrency & parallelism, easy deployment) but the language has significant drawbacks, including those listed in this article.
The Python runtime is relatively poor (slow and limited by the GIL) but the language is extremely popular [1].
Is there any other language which is a "best of both"?
It seems you're looking for a language/runtime that's 'extremely popular' and has very useful runtime features like 'speed, concurrency & parallelism'?
This constrains the space quite a bit. Languages significantly less popular than Python are excluded by definition [1]; languages subscribing to very opinionated design decisions or too low-level (like C) are also excluded by definition.
This only leaves Java, C++, Python itself, and Javascript. If you run Python on a JIT runtime like PyPy [2], you'll probably achieve significant speedup.
Erlang, Elixir, Scala, Clojure, languages that also have good approaches towards concurrency are not very popular in the greater scheme of things.
To clarify, I'm not necessarily looking for something "extremely popular". I was using the fact that so many people consider Python to be their favorite language as a proxy for the high quality of the language design. Perhaps "popular" was the wrong word?
So, thank you for listing some less-popular languages as well. Unfortunately, of all 8 languages listed in your post, is C++ the only one which can produce standalone binaries and thus match the easy deployment of Go?
"If you’re building in-house software that has to run only on single platform then many of the above points may become non-issue for you."
I find this quote quite surprising if he speaks about Go.
As long as your customer has Python (with the right version) and all of the dependencies installed, everything is fine. But if you bring in a 3rd-party dependency (say... requests?) you are in a world of pain. If you add a non-pure-Python dependency, that's even worse.
While with Go, you just need to compile your stuff? (assuming your libraries support your target system)
Often I encounter similar instances where something fundamental is missing from golang's standard packages and no third-party library exists.
Authors cite superior support for tty on variant archs. But by using golang as the central pipeline manager, calls to jsdom or PIL or ffmpeg simply become another stage in the pipeline. Any number of Python microservices can be composed, while retaining golang as the glue providing timeouts, sync, etc.
Still, whatever works for the Asciinema folks is OK in my book. Great service, and I will be recommending it to all!
Each time I read a post like this I just think of a new name for it: "Why I didn't understand Go and now I have to come back to my previous programming language..."
> Easier to ask for forgiveness than permission. This common Python coding style assumes the existence of valid keys or attributes and catches exceptions if the assumption proves false. This clean and fast style is characterized by the presence of many try and except statements. The technique contrasts with the LBYL style common to many other languages such as C.
And as that comment correctly reminds us, .get()/getattr/etc exist for that very reason, and I will almost always reject a code review that tries such an operation instead of using .get(). Don't do this:
try:
bar = dictionary['foo']
except KeyError as ex:
# handling
Do this:
bar = dictionary.get('foo')
if bar is None:
# optional handling, None might be OK
Or this:
bar = dictionary.get('foointeger', 0)
# now you don't even need handling
You avoid scope issues with the try block, stack unwinding, all sorts of issues. This works for attributes too with getattr(). It is quite easy to work in Python without try/except, for the most part, especially if your types are well-built.
That addresses the specific case mentioned, of course, but not others. Occasionally you do have to handle exceptions but this almost always revolves around I/O, and you can isolate those portions in well-defined types that expose functionality to the rest of your program instead of trying every few lines.
You're confusing improper lack of abstraction with error handling methodology.
The place to use EAFP in this example is in dict.get itself:
def get(self, key, default=SENTINAL):
try:
return self[key]
except KeyError:
if default is not SENTINAL:
return SENTINAL
raise
(made-up implementation ^ I'm not sure if Python's stdlib actually does exactly that)
I absolutely love Python's exception handling, and I strongly subscribe to EAFP (not just in coding, even), but I rarely use try/except, because most of the time your business logic (where I spend most of my time) should be delegating to something (external/internal library, etc.) that handles exceptions, rather than littering your business logic with low level logic.
I'm not confused about anything aside from your claiming I'm confused and then agreeing with me in your conclusion.
I'm reacting to the explanation of EAFP using that example. And no, Python's implementation does not do that, since (a) I'm almost positive dict.get is not implemented in Python and (b) your code is quite obviously incorrect.
Yes, I'm quite sure, since your code above contains "return SENTINAL" and you fixed the bug in your link. (It's also sentinel.) It's doubly incorrect even in a specification sense, since dict.get defaults to None by returning NULL (it is implemented in C in CPython[0][1], and I'm assuming you can read those references based on your assertion of skill) and no sentinel is used or needed. In your Python translation, that'd be defaulting to None instead of a sentinel. It's triply incorrect because dict.get also does not re-raise a KeyError; it always returns something barring a runtime error of some kind or asking for a key that cannot be hashed:
>>> {}.get({}) is None
TypeError: unhashable type: 'dict'
>>> {}.get("key") is None
True
>>> {}.__hash__ is None
True
>>> "key".__hash__ is None
False
Although really, the C implementation does this, indirectly:
def get(self, key, default=None):
return self[key] if key in self else default
The reason I say indirectly is because (a) there's no exception handling in use in the C case and (b) there's no equivalent to the hash table lookup failing when written in Python, so it's an inexpressible concept. The Python version will search twice, while the C does not. The KeyError except is a nice analog, but CPython does not futz with stack frames at all if the key lookup fails, so it's not directly comparable to your version. My final version is the closest you'll get to translating what Python does to Python.
If you're speaking with someone new, you can learn a lot more by not condescendingly assuming competence of the other party, particularly when they correctly spotted those three problems and you didn't. I trust this reply will alleviate the misconception.
I don't have time to read your reply yet, for the same reason I didn't have time to waste on typos, etc., because I'm on my phone (traveling through Europe, no notebook), but it's clear your ego is preceding you, and thus you won't learn from my comment anyway. The others that upvoted me did.
I don't know what typo my first example has (the bug you mentioned), but the point is to teach you about the value of using try/except in abstraction, rather than littered throughout your code.
TBH, this is what happens when an immovable object meets an unstoppable force haha. I saw the rest of your comment above now, and I realize (please bear in mind, I was typing on my phone while walking through noisy Praha, after some wine) that I was thinking of getattr while I was typing that. I also scrolled up to realize the bug you mentioned (re: returning the sentinel value). Also, my gratuitous use of the misspelled "SENTINAL".
Despite our alpha mentalities, we are not worlds apart, truthfully. My original response came across sounding hardliner pro-try-except, anti-anything-else, but my point was only that people abuse exception handling in their primary business code (e.g., directly handling a network IOError in a Django view). As a result, they have a bad time. Libraries and utils should be relegated to exception handling duties like that. When following principles like that, developers love Python's try / except blocks for their clarity.
No, that's not "what happens haha." Those are several lame and, frankly, childish excuses instead of just owning that you arrogantly and condescendingly misjudged my competence and inappropriately spoke down to me like you were the universe's gift to comprehension of error handling in Python. You should learn from the experience and assume at all times that you are not the most skilled person in the conversation. I am never the most skilled person in any conversation, which, ironically given your assertions about me earlier in this thread, allows me to learn quite a bit.
Do not project your "alpha mentality" onto me, please. I am the polar opposite of "alpha." That was all you.
You were wrong, and I didn't stoop down to assuming anything about your competence as you so desperately tried to get me to do. Instead of blaming wine like a child, you should probably objectively look at yourself and your judgments of people. Your attitude is not uncommon, and I specifically look for it when I interview.
I'd have a lot more respect for you if you'd own the mistake, but I can tell that you're not going to, so this will be my last comment in this thread as well.
I thought about that, but i both used it and provided a link which is the expanded form. Granted, it's in reverse order, but /shrug.
Also, i did not complain at all - i was confused and posted it to help others. You start off assuming i had negative intentions, and i don't appreciate that. I was simply helping.
IMO Golang is a C-like language touted as a Python-like language. I recently replaced a hand-rolled task-queue processing service written in Go with Python + Celery, and it's yielded large gains so far.
Granted, I should've done it with Celery to begin with, but I think that's part of Go's problem: it lures you into using it when you probably shouldn't.
Go should be faster as a queue worker in almost all cases. If you implemented the actual queue in Go, that was likely your mistake; building queues is hard. If you had deferred that to something like que (Postgres advisory-lock based), Redis, or even AMQP, it should have run circles around Python.
> they don’t like vendored (bundled) dependencies (also understandable) (Gentoo example)
I don't understand the rationale here. Why would a linux distro reject a package that incorporates all of its dependencies, thereby becoming dependency-free?
Exciting to read this, not just because of how interesting and rare it is to hear about a project not just switching languages, but from Go to Python, but to learn that the author is invested enough in asciinema to do such an overhaul. It serves a niche purpose but it serves it really well, especially for teaching code. The recent development of a self-hosted option for asciinema playbacks made it especially useful to me.
I've been coding primarily in Go for the last year or so. Previously some Ruby and lots of C/C++. I actually don't like coding in it very much. It's far too much typing and boilerplate stuff. I'd love to just be able to handle exceptions and move on with my life instead of checking for errors on every other line.
> I'd love to know why they decided to move in the first place?
They drank the Go Kool-Aid, like many others, and then came to their senses. Go isn't a silver bullet; it's a trade-off, like everything else. Speed is nice. Go is fast. But when your codebase becomes full of "if err != nil {", it starts getting ugly.
Go has shown there is a place for a safe C in Python's clothes. Go just doesn't fulfill that idea, though; the language is too rigid to be enjoyable for a lot of people.
I get this sentiment. There's an attitude among golang developers that things which are "trivial" to implement don't belong in the go standard library (even if they would see high usage). That, and Go's animosity toward generic code and syntax sugar make me feel like I'm fiddling with nuts and bolts sometimes.
For example, Go stubbornly excludes a math.Round() function from its math package because "the bar for being useful needs to be pretty high" (this comment was followed by several buggy implementations of the method[1]). Go excludes integer math functions, and the lack of generics means I have to cast to float64 or write my own implementation everywhere.
Go’s lack of versioned packages and central repository makes packaging cumbersome.
One of the big headscratchers of the golang ecosystem. It's resulted in a million different package managers and ridiculous tools that rewrite all of your imports for you.
My favorite method of managing requirements now is glide[2] which pins all of your dependencies (to a git commit or tag) in a glide.lock file. Glide fetches dependencies into the vendor directory, but you never need to commit your vendored dependencies. Instead, commit the glide.lock file and glide can fetch everything for you.
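A hypothetical glide.yaml to show the shape of this (the package paths and version ranges here are illustrative, not from any real project); glide resolves these ranges and records the exact commits it picked in glide.lock, which is what you commit:

```yaml
# glide.yaml -- hypothetical example; paths and versions are illustrative
package: github.com/example/myproject
import:
- package: github.com/pkg/errors
  version: ^0.8.0
- package: github.com/Masterminds/semver
  version: ~1.2.0
```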
----
1. https://github.com/golang/go/issues/4594#issuecomment-660733...
2. https://github.com/Masterminds/glide