
My counter-argument to this would be that it is as expensive to compare LE k[4]s with each other as it is BE k[0]s.

As long as you deal with fixed-length chunks of data, accessing them from either end should be equal effort (to a first approximation[1]).
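To make that concrete, here is a minimal sketch in C (assuming 4-byte unsigned values stored as raw byte arrays; the function names are mine, just for illustration). Comparing two values takes the same number of byte accesses either way; only the direction of the walk from the most significant byte differs:

    #include <stddef.h>

    /* Compare two 4-byte unsigned integers stored big-endian:
       walk from the most significant byte (index 0) upward. */
    static int cmp_be4(const unsigned char *a, const unsigned char *b)
    {
        for (size_t i = 0; i < 4; i++) {
            if (a[i] != b[i])
                return a[i] < b[i] ? -1 : 1;
        }
        return 0;
    }

    /* Same comparison for little-endian storage: walk from the most
       significant byte (index 3) downward -- same number of steps. */
    static int cmp_le4(const unsigned char *a, const unsigned char *b)
    {
        for (size_t i = 4; i-- > 0; ) {
            if (a[i] != b[i])
                return a[i] < b[i] ? -1 : 1;
        }
        return 0;
    }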

This is qualitatively different from the odd/even case, because for a number of unknown length you can tell odd/even in O(1) for LE but need O(n) for BE (you have to find the LSB in n steps).
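A sketch of that difference, again in C, with a made-up encoding just to make "unknown length" concrete (digit bytes terminated by a 0xFF sentinel):

    #include <stddef.h>

    /* Little-endian storage: the least significant byte comes first,
       so parity is a single memory access, O(1). */
    static int is_odd_le(const unsigned char *num)
    {
        return num[0] & 1;
    }

    /* Big-endian storage: the least significant byte comes last, so we
       first have to walk to the end of the number to find it, O(n). */
    static int is_odd_be(const unsigned char *num)
    {
        size_t i = 0;
        while (num[i + 1] != 0xFF)  /* scan for the last digit byte */
            i++;
        return num[i] & 1;
    }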

Mathematically, you get more information from just the LSBs than from just the MSBs without knowing the whole number and its length. I think this is the only reason why LE is marginally better; everything else boils down to convention.

[1] I know that on modern architectures it can be faster to read memory upwards than downwards because of the pre-fetcher, but that is what I meant by the advantage being a matter of convention. If we had a symmetric pre-fetcher the point would be moot.



True. There is a significant asymmetry, though, in that you are more likely to be in a situation where you know the starting address of an object and a minimum size than you are to be in a situation where you know the end address of an object and a minimum size. Strictly speaking that's also an arbitrary convention (as I guess the address of a struct could be defined as the address of its last byte), but it's a near-universal one.



