It's true that humans didn't discover zero until relatively recently in mathematical history. Should we reject these "new" developments in numeracy and go back to Roman numerals? One could make a pretty strong case that Roman numerals are more intuitive, more human.
That's because making the switch from 1-indexing to 0-indexing is extremely counter-intuitive. But consider a hypothetical person who learned 0-indexing socially from birth: I'd imagine they'd be just as confused by 1-indexing.
My point was that it's rather awkward to count from 1 when doing programming.
For instance, could you rewrite this function where int count = 1; and have it still be intuitive?
int count_occurrences(std::vector<int> vec, int value) {
    int count = 0;
    for(int i = 0; i != vec.size(); ++i) { // using 0-based index
        if(vec[i] == value)
            ++count;
    }
    return count;
}
My attempt is: (look how ugly!)
int count_occurrences(std::vector<int> vec, int value) {
    if(vec.empty())
        return 0;
    int count = 1; // forced to start at 1, so pretend the first element matched...
    if(vec[1] != value) // ...and undo that head start when it didn't
        count = 0;
    for(int i = 2; i <= vec.size(); ++i) { // using 1-based index
        if(vec[i] == value)
            ++count;
    }
    return count;
}
As another commenter pointed out, it's only the indices that are one-based. I think it would look something like this:
int count_occurrences(std::vector<int> vec, int value) {
    int count = 0;
    for(int i = 1; i <= vec.size(); ++i) { // using 1-based index
        if(vec[i] == value)
            ++count;
    }
    return count;
}
Off the top of my head, one-based indexing would have a number of benefits in higher-level languages. For instance, in JavaScript, here are some common inconveniences caused by zero-based indices:
var lastEl = arr[arr.length - 1];
if(arr.indexOf(someEl) !== -1) {
    // do something knowing that someEl is in the array
}
and if we lived in a one-based world, here's what they would look like:
var lastEl = arr[arr.length];
if(arr.indexOf(someEl)) {
    // do something knowing that someEl is in the array
}
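(Worth noting that modern JavaScript has since papered over both inconveniences while staying zero-based: `Array.prototype.at` accepts negative indices, and `Array.prototype.includes` replaces the `!== -1` dance.)

```javascript
const arr = [10, 20, 30];

// at() accepts negative indices counted from the end, so -1 is the last element
const lastEl = arr.at(-1); // 30

// includes() returns a plain boolean, no sentinel value to compare against
if (arr.includes(20)) {
    // do something knowing that 20 is in the array
}
```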