Hacker News

Readers should definitely evaluate the complexity of the code necessary to implement the tests. To that end, I plan to eventually enhance the results web site to allow you to readily view the related source for each test implementation (perhaps using an iframe pointing at GitHub).

For the very first round, we included the relevant code snippets in the blog entry, and I think that added a lot to the context. With nearly 100 frameworks, the volume of code has become too large to embed directly into the blog entry, and as a result we've put too much of a burden on the reader, who must sift through the GitHub repo to compare code complexity.

Things we aim to do:

* Pop-up iframe with relevant code from GitHub.

* Use a source lines of code (sloc) counter to render sloc alongside each result in the charts.

* Render the number of GitHub commits each test implementation has seen in our repository. At the very least, this would show whether a test has seen a lot of review.

* Introduce more complex test types [1].
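The SLOC idea above could be as simple as counting non-blank, non-comment lines. Here's a minimal sketch, assuming single-line comment markers only (the project's actual tooling may well use a dedicated counter like cloc instead; block comments and strings containing comment markers are not handled here):

```python
# Rough SLOC counter: non-blank lines that aren't single-line comments.
# This is an approximation -- block comments, docstrings, and string
# literals containing comment markers are counted as code.
COMMENT_PREFIXES = ("#", "//", "--", ";")

def sloc(source: str) -> int:
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:
            continue  # skip blank line
        if stripped.startswith(COMMENT_PREFIXES):
            continue  # skip single-line comment
        count += 1
    return count

example = '''\
# a comment
def hello():
    return "world"
'''
print(sloc(example))  # -> 2
```

Even a crude count like this, shown next to each result, would give readers a quick proxy for implementation complexity without leaving the charts page.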

And as the other reply has mentioned, we have also discussed the possibility of a larger test type that might include a multi-step process. I'd love to eventually get to that point.

[1] https://github.com/TechEmpower/FrameworkBenchmarks/issues/13...




A matrix rank calculator might be helpful as well, to filter results for users, though I am impressed with the present and new(ish) filtering available.



