
What’s wrong with that? Also, there has to be a cap somewhere in the process.


> What’s wrong with that?

It’s using a ruler to aim an electron microscope.

Standardised tests measure academic potential, i.e. probability of graduating and likely value added to society from the educational experience, with a fair amount of error. Taking everyone within a standard deviation of the cut-off and randomly selecting therefrom admits this error and removes the stigma from those just below the previous, false cut-off. If one really wanted, and if the statistics merited it, one could use a weighted randomisation, with individuals scoring higher getting an advantage compared to those scoring lower.
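For concreteness, here is a minimal Python sketch of that selection rule. The band width (one standard deviation around the cut-off), the linear weighting, and the names (select_applicants, weighted_draw) are all illustrative assumptions, not a description of any real admissions system.

    import random
    import statistics

    def weighted_draw(pool, weights, k, rng):
        """Draw k distinct items, each round proportional to its current weight."""
        pool, weights = list(pool), list(weights)
        picked = []
        for _ in range(min(k, len(pool))):
            i = rng.choices(range(len(pool)), weights=weights, k=1)[0]
            picked.append(pool.pop(i))
            weights.pop(i)
        return picked

    def select_applicants(scores, cutoff, n_seats, weighted=False, seed=None):
        """Admit everyone comfortably above the cut-off, then fill the remaining
        seats by lottery from the band within one SD of the cut-off.

        scores : dict of applicant id -> exam score
        cutoff : nominal cut-off score
        """
        rng = random.Random(seed)
        sd = statistics.pstdev(scores.values())

        sure = [a for a, s in scores.items() if s >= cutoff + sd]
        band = [a for a, s in scores.items() if cutoff - sd <= s < cutoff + sd]

        open_seats = max(n_seats - len(sure), 0)
        if weighted:
            # Linear advantage for higher scorers inside the band (illustrative choice).
            w = [scores[a] - (cutoff - sd) + 1.0 for a in band]
            drawn = weighted_draw(band, w, open_seats, rng)
        else:
            drawn = rng.sample(band, min(open_seats, len(band)))

        return sure[:n_seats] + drawn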


> randomly selecting therefrom admits this error

Well, it enlarges the error. All you have done is to add an additional source of noise on top of the existing one.

> and removes the stigma from those just below the previous, false cut-off

This it may do; it may well be psychologically easier to accept that you got unlucky on the coin flip than that you got unlucky on which topic came up in question 13.

(Not sure what "false" means; the cut-off is a fact of how the system works.)


False as in we don’t know that the person ranked 1,001 has less academic potential than the person ranked 999.

We know, with decent likelihood, that the person ranked 1 has more academic potential than the person ranked 750. But someone at 1,250 is quite likely to exceed the academic potential of someone at 900. Hence the “falseness” of the cut-off.


Sure, but this is like insisting that every Olympic gold medallist holds a "false" medal, every time. The stated rule is that it goes by who crosses the line first on the day, and that obviously isn't a perfect measure of who, in some more abstract sense, is really the fastest man alive.

I guess the common way of deliberately adding noise is rounding off scores, e.g. giving A/B/C grades instead of a rank order, which means nobody gets a letter saying that their exam score has rank 1,001.


> this is like insisting that every guy with Olympic gold holds a "false" medal, every time

Bad comparison. We have loads of statistics from sports showing highly precise rankings: run certain races repeatedly and you’ll find persistent orderings, particularly at the most competitive levels, where innate biology dominates in most sports.

Have a cohort of students re-take a standardised test a few times, on the other hand, and you’ll get a spread. Try to relate that spread to the thing you’re actually trying to measure, academic potential, and it’s a hair better than a crapshoot.


Maybe. I'd hazard a guess that membership of the top 1,000 places in a high-stakes national exam might actually be less noisy than a gold medal. That is, on a re-run, what percentage of people keep their top-1,000 places, and (on re-runs of the last 40 years of Olympics, times roughly 100 individual events, say) what percentage of gold medallists would have kept theirs?

But I have not checked the numbers. Obviously, if you restrict attention to those with exam scores near the boundary, you can get different results. And there are indeed other sports whose results are more precise than the example I mentioned.
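One rough way to put numbers on that guess is a quick Monte Carlo simulation: treat each sitting's score as true ability plus fresh noise, and count how much of one sitting's top 1,000 survives a re-sit. The sketch below uses entirely made-up parameters (standard-normal ability, an assumed noise_sd) and a hypothetical function name, so it only illustrates the shape of the calculation, not the actual answer.

    import random

    def top_k_retention(n_candidates=100_000, k=1000, noise_sd=0.3,
                        trials=20, seed=0):
        """Estimate what fraction of one sitting's top-k candidates stay in
        the top k on a re-sit, if each score is true ability plus fresh noise.
        Ability ~ N(0, 1) and noise_sd are arbitrary illustrative choices."""
        rng = random.Random(seed)
        ability = [rng.gauss(0, 1) for _ in range(n_candidates)]

        def one_sitting():
            scores = [a + rng.gauss(0, noise_sd) for a in ability]
            ranked = sorted(range(n_candidates), key=lambda i: scores[i], reverse=True)
            return set(ranked[:k])

        baseline = one_sitting()
        kept = [len(baseline & one_sitting()) / k for _ in range(trials)]
        return sum(kept) / trials

    print(top_k_retention())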



