
I vaguely remember about some benchmarking project that deliberately randomised these compiler decisions, so that they could give you more stable estimates of how well your code actually performed, and not just how well you won or lost the linker lottery.


You're probably thinking of "Performance Matters" by Emery Berger, a Strange Loop talk: https://youtube.com/watch?v=r-TLSBdHe1A


There was Stabilizer [1], which did this, although it is no longer maintained and doesn't work with modern versions of LLVM. I think there is something more current that automates this, but I can't remember what it's called.

[1] https://emeryberger.com/research/stabilizer/


The Coz profiler from Emery Berger.

It actually goes a step further and gives you a decent estimate of which functions you need to change to get the desired latency/throughput improvements.
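
A minimal sketch of how a throughput progress point looks in practice, assuming coz is installed and coz.h is on the include path (the workload below is a made-up stand-in):

    #include <coz.h>
    #include <stdio.h>

    /* Stand-in for real per-request work. */
    static void handle_one_request(int i) {
        volatile long x = 0;
        for (long j = 0; j < 100000; j++) x += j * i;
    }

    int main(void) {
        for (int i = 0; i < 10000; i++) {
            handle_one_request(i);
            COZ_PROGRESS; /* coz measures how often execution passes here */
        }
        puts("done");
        return 0;
    }

Build with debug info and run it under the profiler, something like:

    cc -g -O2 demo.c -o demo -ldl
    coz run --- ./demo

Coz then reports, per source line, the predicted effect on overall throughput of speeding that line up by various amounts, which is where the "which functions to change" estimate comes from.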


Thanks, I was trying to remember that one!


LLD has a new option "--randomize-section-padding" for this purpose: https://github.com/llvm/llvm-project/pull/117653
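
If I read the PR right, the option takes a seed, so you can relink the same objects under several paddings and compare timings. A sketch (file and binary names are placeholders):

    clang -fuse-ld=lld -Wl,--randomize-section-padding=1 main.o -o bench-1
    clang -fuse-ld=lld -Wl,--randomize-section-padding=2 main.o -o bench-2

Timing both binaries on the same workload gives a feel for how much of a measured difference is just layout noise.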


Interesting, thanks!


"Producing wrong data without doing anything obviously wrong!"

https://doi.org/10.1145/1508244.1508275


"Producing wrong data without doing anything obviously wrong!"

[pdf]

https://users.cs.northwestern.edu/~robby/courses/322-2013-sp...


As already mentioned, this is likely Emery Berger's project. The idea is to intentionally slow down different parts of the program to find out which parts are most sensitive to slowdowns (i.e. have the biggest effect on overall performance), on the assumption that those are also the parts that would profit the most from optimisation.



