The biggest blocker for me is usually working out how to implement a given idea without either writing a bunch of code (I'll get bored) or first verifying that the paper even works for my use case, which requires writing that code anyway. One field where I keep hitting this is abstract interpretation: with my background at least, the methods are theoretically clean and impressive, but actually turning them into code, and knowing what a good implementation looks like, seems quite obtuse.
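To make "obtuse" concrete: the core of abstract interpretation is small enough to sketch. Below is a minimal interval-domain abstract interpreter for arithmetic expression trees; all the names (`Interval`, `abstract_eval`, the tuple-based expression encoding) are my own illustrative choices, not from any particular paper.

```python
# A minimal sketch of abstract interpretation over the interval domain.
# Expressions are tuples: ("const", c), ("var", name), ("add", e1, e2),
# ("mul", e1, e2). Instead of concrete numbers, evaluation runs over
# intervals, soundly over-approximating all possible concrete results.

from dataclasses import dataclass


@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # [a, b] + [c, d] = [a + c, b + d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Multiplication must consider every corner combination.
        corners = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
        return Interval(min(corners), max(corners))

    def join(self, other):
        # Least upper bound: smallest interval covering both branches,
        # used to merge states after an if/else.
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))


def abstract_eval(expr, env):
    """Evaluate an expression tree over intervals instead of numbers."""
    kind = expr[0]
    if kind == "const":
        return Interval(expr[1], expr[1])
    if kind == "var":
        return env[expr[1]]
    if kind == "add":
        return abstract_eval(expr[1], env) + abstract_eval(expr[2], env)
    if kind == "mul":
        return abstract_eval(expr[1], env) * abstract_eval(expr[2], env)
    raise ValueError(f"unknown node: {kind}")


# With x in [0, 10], analyze x * x + 1 without ever running the program:
env = {"x": Interval(0, 10)}
result = abstract_eval(
    ("add", ("mul", ("var", "x"), ("var", "x")), ("const", 1)), env)
print(result)  # Interval(lo=1, hi=101)
```

The hard part the papers gloss over shows up immediately: with `x` in `[-2, 3]`, this evaluator gives `x * x` the range `[-6, 9]` rather than `[0, 9]`, because interval multiplication doesn't know both operands are the same variable. Getting precision back (relational domains, widening for loops) is where real implementations get complicated.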
I genuinely don't understand why we allow papers to be published in computer science, with graphs plotting the supposed efficacy of the research (i.e. not just a theoretical paper), with no code attached.
To assess MADDNESS’s effectiveness, we implemented both it and existing algorithms in C++ and Python. All of our code and raw numerical results are publicly available at https://smarturl.it/Maddness. All experiments use a single thread on a Macbook Pro with a 2.6GHz Intel Core i7-4960HQ processor.
I was trying to be clear that I was referring to papers that discuss code the authors wrote. If you wrote (say) a program to predict the throughput of machine code, then I want to be able to reproduce the results you claim. That's a real example: no hint of any source yet, and I've been looking.
If we can't reproduce it, it isn't really science. I know academics often write bad code and don't like to publish their dirty work, but the buck has to stop somewhere.