No, I don't think so. TinyStories has a different architecture and different assumptions for success, different enough that it doesn't compare to the others IMHO.
People run conferences through OpenReview. You sign up as an organizer, people submit papers, and reviewers review them. Then a final decision is made about acceptance.
As a non-expert, I think that OpenReview is basically useless for you. The signal-to-noise ratio is terrible. It's just like arXiv.
Just because a paper got good reviews doesn't make it a good paper, and just because it got bad reviews doesn't mean it's a bad paper. Conference reviews are pretty random. A lot of good work is rejected and a mountain of trash is accepted.
At the moment we're massively short on competent reviewers, so the level of commentary on OpenReview is... "oh look, my racist uncle commented on cnn.com again"-level abysmal.
+1 to this. I've been orienting away from ML conferences because the review process is so bad; it's an incredible amount of wasted work responding to and re-submitting things after getting a bad crop of reviewers. Honestly it also makes me despair for the quality of the conferences themselves.
For my group, there are journals and a whole ecosystem outside of the ML conference space related to the actual application area... We can go to a top-tier journal in the application domain and get better reviewers.
This is a popular site used by various conferences for peer review. Useful for getting expert opinion on papers. Probably too noisy for someone not involved in research.
It’s also useful for getting hot takes by reviewers who didn’t read the paper carefully. Though even in those cases sometimes the author responses can be pretty illuminating
TinyStories: https://arxiv.org/abs/2305.07759
Phi-1.5: https://arxiv.org/abs/2309.05463
Both of these papers showed that with high-quality, concise training data, you can get strong performance at much smaller parameter counts.