Every time I read an HN comment claiming a language to be "blazingly" fast, I wish they posted a link to statistically sound benchmarks, including VM warmup, GC collection times, etc.
Otherwise I just disregard these adjectives. I argue a new compiler/interpreter will always lose against the JVM, which has thousands of man-hours of optimization built in.
The JVM in turn will always lose against a clever, memory-conscious low-level implementation in Rust, C, or assembler.
Please don't advertise speed without any studies or comparisons to back up that claim.
We compile using LLVM, which has had many man-years put into it. Our GC is bdwgc, which, while generic and conservative, has also had a lot of optimization put into it and works very well.
We don't pretend to be a mature language with even so much as predictable performance characteristics, but the "blazing fast" statement is there to indicate that we're much closer to C performance than even Go typically is.
Every time I read such a comment on HN, I smile and remember the days when C programs on 8-bit micros, and later MS-DOS, were 80% inline assembly statements, because the compilers were quite lousy.
Fran Allen is of the opinion that the adoption of C set the field of compiler optimization research back to pre-history (Coders at Work).
It took 40 years of optimization research, and clever exploitation of the UB defined in the standard, for C compilers to achieve the code generation quality they have nowadays.
> Fran Allen is of the opinion that the adoption of C set the field of compiler optimization research back to pre-history (Coders at Work).
Yes! The entire book is wonderful, but as a compiler writer myself, Fran's interview really stuck with me.
The relevant passage, for the curious:
———
Seibel: When do you think was the last time that you programmed?
Allen: Oh, it was quite a while ago. I kind of stopped when C came out. That was a big blow. We were making so much good progress on optimizations and transformations. We were getting rid of just one nice problem after another. When C came out, at one of the SIGPLAN compiler conferences, there was a debate between Steve Johnson from Bell Labs, who was supporting C, and one of our people, Bill Harrison, who was working on a project that I had at that time supporting automatic optimization.
The nubbin of the debate was Steve's defense of not having to build optimizers anymore because the programmer would take care of it. That it was really a programmer's issue. The motivation for the design of C was three problems they couldn't solve in the high-level languages: One of them was interrupt handling. Another was scheduling resources, taking over the machine and scheduling a process that was in the queue. And a third one was allocating memory. And you couldn't do that from a high-level language. So that was the excuse for C.
Seibel: Do you think C is a reasonable language if they had restricted its use to operating-system kernels?
Allen: Oh, yeah. That would have been fine. And, in fact, you need to have something like that, something where experts can really fine-tune without big bottlenecks because those are key problems to solve.
By 1960, we had a long list of amazing languages: Lisp, APL, Fortran, COBOL, Algol 60. These are higher-level than C. We have seriously regressed, since C developed. C has destroyed our ability to advance the state of the art in automatic optimization, automatic parallelization, automatic mapping of a high-level language to the machine. This is one of the reasons compilers are …[sic] basically not taught much anymore in the colleges and universities.
Seibel: Surely there are still courses on building a compiler?
Allen: Not in lots of schools. It's shocking. There are still conferences going on, and people doing good algorithms, good work, but the payoff for that is, in my opinion, quite minimal. Because languages like C totally overspecify the solution of problems. Those kinds of languages are what is destroying computer science as a study.
———
(pp. 501-502)
I recommend that any programmer who hasn't read this book give it a read. In fact, I think I might give it another read this week :)