
I would be curious to see what the performance is like in Go. I've been experimenting with it a bit lately, and coming from a mostly C and Python background, I have thus far found it easy to pick up. I haven't tested the performance for my own use cases very much yet, but I am told it compares very well.

I think the real challenge for scientific computing (I'm a graduate student, so this is most of the programming I do) is that there is already a huge network effect around NumPy + SciPy + matplotlib and friends. Golang just doesn't quite have the library ecosystem yet, although gonum[0] shows some potential.

In my limited experience so far, I think Go is good in an environment where most people are going to have experience with C and/or Python. It also makes it much harder to write truly crappy code, and it's much easier to get a Go codebase to build on n different people's workstations than C.

Having written a lot of Python, and relatively little Go, I think I would prefer to write scientific code in Go if the libraries are available for whatever I'm trying to do.

It's also much easier to integrate Go and C code, compared to integrating C and Python.

0 - https://www.gonum.org/




Personally, I think numerical code should still mostly be written in C++; right now it has by far the widest choice of options for doing so. It is also relatively easy to interface with Python. For example, xtensor, libtorch/ATen, and ArrayFire all have straightforward Python interoperability via pybind11.

Finally, no other language except maybe Fortran has seamless parallelisation support and first-class low-level numerical primitives developed by vendors. Sometimes you get a massive performance increase just by adding `#pragma omp parallel for`.

Even for visualization, some Python libraries will suddenly fall off a cliff (Altair, for example) once you reach a moderately large number of data points.


I would definitely agree that it depends on what kind of scientific computing you are doing.

For big numerical stuff and things that need to run on supercomputers, C/C++/FORTRAN are definitely very relevant and I don't see that changing. Likewise for edge stuff that has to run on bare metal or embedded, I think we're still going to be using C/C++ for a long time to come.

"Scientific computing" is a huge range of different use cases with very different levels of numerical intensity and amounts of data. I doubt very much that there would ever be a one-size-fits-all approach.

However in the context of the OP, I'm arguing that Go would be preferable to Python for the purpose of writing bioinformatics models, and certainly more suitable than Lua or JavaScript.

Of course Python can sometimes be very performant if you leverage NumPy/SciPy, since those are ultimately bindings into the FORTRAN numeric computing libraries of yore. But if we're talking about writing the inner loop, and the choices are Go, Python, Lua, and JavaScript, I think Go is going to win that on the performance and interoperability fronts handily (I omit Crystal, as I am not familiar with it).


Even in the context of bioinformatics my comment applies. With modern C++ libraries you can replicate the numpy user experience almost line by line. A baseline FASTQ parser in modern C++ would look nothing like the fastq.h C parser the author presented. Naive versions of sequence alignment algorithms like Needleman-Wunsch are easily implemented in C++ as well, and you can even do most of your development in a Jupyter notebook with cling/xeus.


I'll take your word for it; I haven't worked in that field.

I do still think it would be interesting to see a comparative benchmark, though. I know the Go compiler tries to use AVX and friends where available. I doubt it will ever beat a competent programmer using OpenMP to vectorize, though goroutines might be competitive for plain multithreading.

A relevant consideration too: OpenMP seems to be moving in the direction of supporting various kinds of accelerators in addition to CPUs, so your C++ code has a better chance of being performance-portable to an accelerator if you need one.


Note the Go compiler is actually fairly immature compared to C++ compilers. It does not do any AVX autovectorization; any AVX optimization is hand-written assembly by a library author.


Is Go really high-level? IMHO the type system is so botched you cannot really say that.

One cannot even write a generic `reduce` function...


Someone already has a PR merged for Go; the benchmark numbers need updating.


Performance in Go? Far less than what you would get from Rust.


I would expect Rust to be faster. I never claimed Go would be faster than Rust, but that it would be faster than Python.

I think for Rust, the barrier to entry is high in that it is a difficult language to learn. Admittedly, I don't know Rust, but people I know who do have said as much.

I think Go strikes a good balance of being easy to learn and use, having a rich standard library, and also being performant.


I agree. I write Go professionally, and you can learn the entire language in a weekend.

I have sat down to learn Rust several times, investing many hours, and still wouldn't say I'm in any way competent at it.




