Just time. We were already two weeks late on Round 8 due to a host of other issues. We'd like to do one round per month if we can get our routine ironed out.
We'll make it a priority to get them run in Round 9.
Please don't be annoyed by the following comment, as I appreciate your effort and like the benchmarks very much (as a hacker - as a project leader I'd prefer the "enterprise frameworks" to perform much better :-) ): you should really avoid publishing incomplete benchmarks. They don't do justice to either the frameworks left out or the ones included.
> You should really avoid publishing incomplete benchmarks.
If we had followed that advice, there never would have been a Round 1. I was so uncomfortable with the idea that we'd be publishing surely flawed benchmarks of frameworks we didn't really understand that I requested to be taken off the project (prior to Round 1). It was only after seeing the post-Round 1 discussions and the flood of pull requests that I realized I was wrong.
These benchmarks are always going to have flaws. I think it is better for us to regularly publish our best attempt than to try for perfection.