Hacker News new | past | comments | ask | show | jobs | submit login

This may be fairer for the "simple" case in Nim.

    import tables, strutils
    var counts = initCountTable[string](16384)  # pre-size to limit rehashing
    for line in stdin.lines:
      for word in line.toLowerAscii.split:
        counts.inc word
    counts.sort            # descending; pass SortOrder.Ascending to reverse
    for word, count in counts:
      echo word, " ", count
I compiled with `nim c -d:danger --gc:orc --panics:on` with gcc-10.2 on Linux-5.11 with nim-devel and the input file in /dev/shm. Runs in about 1.97x the time of "wc -w" for me (0.454 s vs 0.2309 s).

If we apply "BS-scaling" to the article's table, that would be 2.27 * 0.2 = 0.454 s on the author's machine, which would make it twice as fast as the "simple C". Yet if I actually run the simple C, I get 0.651 seconds, so only 0.651/0.454 = 1.43x the "simple C" speed. That mismatch is, again, bigger than Go vs. RustB (0.38/0.28 = 1.36x). My only point is that you cannot just apply BS-scaling; these results may very well fail to generalize across environments.
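To make the arithmetic above concrete, here is a quick sketch in Python (all numbers are copied from this comment; the 0.2 s figure is the article's "wc -w" baseline as I read it):

```python
# "BS-scaling" arithmetic check, using the timings quoted above.
nim_vs_wc = 0.454 / 0.2309             # my Nim time relative to "wc -w"
predicted = 2.27 * 0.2                 # scaled prediction for the author's machine
actual_c = 0.651                       # "simple C" as actually measured here

print(round(nim_vs_wc, 2))             # ~1.97x "wc -w"
print(round(predicted, 3))             # 0.454 s predicted
print(round(actual_c / predicted, 2))  # ~1.43x, not the ~2x the scaling implies
```

The point of the sketch is only that the scaled prediction (0.454 s) and the real measurement (0.651 s vs 0.454 s, i.e. 1.43x) disagree by more than the Go/RustB gap in the article.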

For what it's worth, literally every time I redo something from Rust in Nim, the Nim version is faster. But benchmarks are like opinions: everyone has them, and you should very much form your own rather than delegate to "reputation". The best benchmark is actual application code. I make no positive general claims, but say this only because Rust people so often do. { EDIT: I think it is generally a big mistake to make many assumptions about programming-language performance, especially if the assumption is on the "must be fast" side. }




Thanks! That's a much cleaner implementation, and I learned some cool Nim. I got a note on GitHub with a similar implementation, so I went with theirs.

And, cool, "BS-scaling" is my new term of art :-)



