
A better way to get good performance is to thread your switch statement, which was hard to do explicitly in Rust the last time I tried (maybe you could manage it by marking the handler functions as inlinable?).



With current branch predictors, threaded code might not make as big a difference as it used to.

https://hal.inria.fr/hal-01100647/document


What do you mean by "thread your switch statement"?


The "big switch statement" approach is for each bytecode instruction to complete by jumping to a centralized dispatch location (i.e. the switch statement).

The "threaded" approach is for each bytecode instruction to complete by decoding and jumping to the handler for the next instruction.

Basically, instead of `break` you have `goto handlers[nextIp->opcode]`.

The advantages of threading are fewer jumps and better branch prediction (since predictor state is tied to the address of the branch). The disadvantages are slightly larger code, and that compilers struggle to optimize it, since the control flow is not structured.
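To make that concrete, here's a minimal sketch of the two dispatch styles side by side in GNU C (the `&&label` / computed-goto extension supported by GCC and Clang). The three-opcode bytecode is invented purely for illustration.

```c
/* Sketch contrasting switch dispatch and threaded dispatch, using GNU C's
   labels-as-values extension. The tiny PUSH/ADD/HALT bytecode is made up. */
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_HALT };

/* Switch dispatch: every handler jumps back to one central switch. */
static int run_switch(const int *code) {
    int stack[16], sp = 0;
    for (const int *ip = code;;) {
        switch (*ip++) {
        case OP_PUSH: stack[sp++] = *ip++;              break;
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case OP_HALT: return stack[sp - 1];
        }
    }
}

/* Threaded dispatch: each handler decodes the next opcode itself and
   jumps straight to its handler -- there is no shared dispatch point. */
static int run_threaded(const int *code) {
    static void *handlers[] = { &&do_push, &&do_add, &&do_halt };
    int stack[16], sp = 0;
    const int *ip = code;

    #define DISPATCH() goto *handlers[*ip++]
    DISPATCH();
do_push: stack[sp++] = *ip++;              DISPATCH();
do_add:  sp--; stack[sp - 1] += stack[sp]; DISPATCH();
do_halt: return stack[sp - 1];
    #undef DISPATCH
}

int main(void) {
    const int prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT };
    printf("%d %d\n", run_switch(prog), run_threaded(prog)); /* prints 5 5 */
    return 0;
}
```

The threaded version repeats the dispatch (`goto *handlers[*ip++]`) at the end of every handler. That duplication is the point: the predictor sees one indirect-branch site per opcode handler instead of a single shared one.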


This design is called a continuation-passing-style interpreter. [1]

Here's a production version from OpenJ9's JVM bytecode interpreter. [2]

[1] https://kseo.github.io/posts/2017-01-09-continuation-passing...

[2] https://github.com/eclipse/openj9/blob/01be53f659a8190959c16...
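For comparison, here's a minimal sketch of the continuation-passing idea in plain C (not OpenJ9's actual code): each handler is an ordinary function that finishes by calling the handler for the next opcode through a table. It only stays flat if the compiler turns those calls into tail calls (e.g. at -O2, or with Clang's `musttail` attribute); the opcodes are made up for illustration.

```c
/* Continuation-passing-style dispatch sketch: handlers "continue" into the
   next opcode's handler via a table, so there is no central dispatch loop. */
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_HALT };

typedef struct { const int *ip; int *stack; int sp; } Vm;
typedef int (*Handler)(Vm *vm);

static int dispatch(Vm *vm);   /* decode the next opcode and continue */

static int op_push(Vm *vm) {
    vm->stack[vm->sp++] = *vm->ip++;
    return dispatch(vm);       /* tail call into the next instruction */
}
static int op_add(Vm *vm) {
    vm->sp--;
    vm->stack[vm->sp - 1] += vm->stack[vm->sp];
    return dispatch(vm);
}
static int op_halt(Vm *vm) {
    return vm->stack[vm->sp - 1];
}

static const Handler handlers[] = { op_push, op_add, op_halt };

static int dispatch(Vm *vm) {
    return handlers[*vm->ip++](vm);
}

int main(void) {
    const int prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT };
    int stack[16];
    Vm vm = { prog, stack, 0 };
    printf("%d\n", dispatch(&vm));   /* prints 5 */
    return 0;
}
```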



I think he means having a different thread for each case, or for a subset of cases, instead of any explicit switch statement.


Looks like what emulators are using.



