I agree. A function that has no side effects and returns nothing (like those in the benchmark above) is doing nothing at all as far as the compiler is concerned, so it will be optimized away entirely and never run. Modern compilers are incredibly good at that kind of thing.
So what OP should do is make the function return something, and then make sure that 'something' somehow survives the test. For example, if that 'something' is a number, add it to a global variable that outlives the benchmark and print that global at the end. If it's a string, add its length (or some hash of it) to a global instead. That forces the compiler to emit code that actually executes the function, so the benchmark really measures it.
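A minimal sketch of that pattern might look like this. The function under test here (`fib`) and the iteration count are just placeholders; the point is the `sink` accumulator and the final print, which keep the result observably live:

```javascript
// Hypothetical function under test; any return value works.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

let sink = 0; // global accumulator that survives the benchmark loop

const iterations = 1000;
const start = Date.now();
for (let i = 0; i < iterations; i++) {
  // Consume the return value so the call can't be elided as dead code.
  sink += fib(15);
}
const elapsed = Date.now() - start;

// Printing the sink forces the engine to treat it (and everything
// that fed into it) as live.
console.log(`sink=${sink}, elapsed=${elapsed}ms`);
```

The same idea works for strings by accumulating `result.length` into the sink instead of the value itself.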
I don't think that goes far enough. Consider: even if all the functions returned something, a clever compiler might inline the hot calls, see that most of the argument checks are unnecessary, and eliminate them. Which would mean it has gotten rid of precisely the code being benchmarked!
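To make the hazard concrete, here is a hypothetical example. The result is kept live via a sink, yet once the engine inlines `checkedAdd` into the loop it can prove both operands are always numbers, so the `typeof` checks (the very code someone might be trying to benchmark) can be optimized away anyway:

```javascript
// Hypothetical function whose argument checks are the thing
// supposedly being benchmarked.
function checkedAdd(a, b) {
  if (typeof a !== "number") throw new TypeError("a must be a number");
  if (typeof b !== "number") throw new TypeError("b must be a number");
  return a + b;
}

let sink = 0;
for (let i = 0; i < 1000000; i++) {
  // The result survives, so the call isn't dead code. But after
  // inlining, the engine sees the arguments are always numbers and
  // may constant-fold the typeof checks out of the hot loop.
  sink += checkedAdd(i, 1);
}
console.log(sink);
```

Whether a given engine actually performs this elimination is an assumption here; the point is that keeping the return value alive does not guarantee the interior of the function stays intact.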
In short, writing JS microbenchmarks that actually test what they claim to test is quite hard.