
> a tiny interpreter that sits in L1 and takes its instructions in a very compact form ends up saving you more memory bandwidth

There's a paper on this you might like. https://www.researchgate.net/publication/2749121_When_are_By...

I think there's something to the idea of keeping the program in the instruction cache by deliberately executing parts of it via interpreted bytecode. There should be an optimum around zero instruction cache misses, either from keeping everything resident, or from deliberately paging instructions in and out as control flow in the program changes which parts are live.
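To make that concrete, here's a rough sketch of my own (not from the paper, opcodes made up for illustration) of what "compact instructions" buys you: a switch-dispatch interpreter whose whole loop compiles to a few hundred bytes and stays resident in L1i, while the program itself is just a dense byte stream that flows through the data cache instead of competing for instruction cache.

    /* Minimal sketch: a tiny stack-machine interpreter with one-byte
     * opcodes. The dispatch loop is small enough to live in L1i; the
     * bytecode is read as data. Opcodes are hypothetical. */
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

    static void run(const uint8_t *code) {
        int64_t stack[64];
        int sp = 0;
        for (;;) {
            switch (*code++) {
            case OP_PUSH:  stack[sp++] = (int8_t)*code++; break; /* next byte is an immediate */
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case OP_MUL:   sp--; stack[sp - 1] *= stack[sp]; break;
            case OP_PRINT: printf("%lld\n", (long long)stack[sp - 1]); break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void) {
        /* (2 + 3) * 4 encoded in 10 bytes; the equivalent inlined
         * machine code would typically be larger. */
        const uint8_t prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                                 OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT };
        run(prog);
        return 0;
    }

Whether the saved icache footprint pays for the dispatch overhead is exactly the tradeoff the comment below is about.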

There are complicated tradeoffs between code specialisation and size. Translating some code back and forth between machine code and bytecode adds another dimension to that.

I fear it's either the domain of extremely specialised handwritten code - LuaJIT's interpreter is the canonical example - or of the sufficiently smart compiler. In this case a very smart compiler.



