
You could run the exact same computation and still do something useful. The inputs do not have to be identical; they can be coordinated (or random). Think of running a brute-force search on some useful problem.



Inputs definitely need to be identical because different inputs may lead to different behaviours in branch predictors and memory access patterns, affecting the score.

The impact may be small, but I see no reason why the impact should be there in the first place just to satisfy some mathematicians' curiosity about special numbers.


Sorry, I meant running different inputs on a program that does not branch or have an input-dependent memory access pattern. Any computation can be written in such a form, although there might be a large overhead compared to programs that do branch.


There are certain problems for which different inputs do not require different amounts of computation. "Add one to a number between 128 and 256" is an obvious example. The question is whether there are useful problems with that property.


Even such a task is subject to very specific requirements, because adding a number between 128 and 256 may be enough to take a different path in the microcode depending on whether the result overflows. I'm not saying this happens in practice, but I wouldn't be surprised if a future generation of processors did this and invalidated the entire benchmark from that point on.

A more likely scenario for such independent instructions would operate on an entire bit string, like boolean operators and vector instructions. I think you'd have a tough time producing any useful output from such an algorithm, though, because you wouldn't be able to do much with conditionals to keep the branch predictor score fair.

I don't think that there are any algorithms that could operate within a generic benchmark that could have random elements in them _and_ produce a useful result. Either the calculations are different and fair but meaningless, or they're exactly the same with the same result.


> I don't think that there are any algorithms that could operate within a generic benchmark that could have random elements in them _and_ produce a useful result.

One that springs to mind is Monte-Carlo sampled raytracing. An individual ray might take more or less time to compute, but the time to compute 10 million rays will be statistically roughly constant. You could even imagine averaging a bunch of renders of the same scene from different machines to get a lower-noise result, thereby demonstrating a benchmark combined with useful work.

Statistical predictability is the key.

(Confession - this isn't exactly theoretical. I sometimes have occasion to render light fields, and I shard the work over as many random workstations as I can get my hands on. It's always obvious which workstations are faster than others, even without making any special attempt to balance the workload. I think this is actually a workable concept.)


We already have an existence proof for useful tasks for which changing the inputs does not invalidate benchmark results - namely cryptography, where algorithms must run in constant time to avoid timing side-channel attacks. If your hardware takes a non-constant amount of time to add 8-bit integers, your hardware is broken and should receive a benchmark score of zero. The interesting question is how much overhead would be involved in turning scientific tasks like Mersenne prime search into benchmark-friendly workloads.


Wow. That makes it sound like even a perfectly deterministic calculation cannot be compared across machines. Some will be good at some, some good at others, and whether one machine is overall better than another depends on their intended uses.

Which now that I think about it, of course it would be like that. But still, what a headache for someone who just wants The Best One.


Even a simple addition of two integers can take a different amount of energy to perform depending on the values involved. This in turn affects the temperature of the chip, which can cause thermal throttling. Computer architecture is more complex than most software engineers realize.



