I imagine zero-based indexing came about because resources were scarce and bit counts were low in the bad old days. Why waste that zero index? From there it was backwards compatibility.
If machine code uses zero, then assembly inherits that, then C and everything else. Even JS months!
No, zero-based indexing is simply based on how arrays actually work; it has nothing to do with resources being scarce. You have an address for the beginning of the array plus an offset. The first element has no offset, so you use zero, and you naturally end up with zero-based indexing.
It does make some sense: if I have some apples, and I ask a child to count them, they'll address each one starting with the number 1, not the number 0.
Yes, indeed. But when indexing into an array you aren't measuring from zero (well, in memory you are, but not conceptually). Conceptually you're pointing at an apple.
I think it's just hard to unlearn the offset mentality. Of course you can convert a direct-index mentality into an offset mentality, but I don't think anyone points at the first item in a row and thinks "that's a zero offset from the start of the row". They think, "That's item number 1".
Yeah, that's because the distinction between indexing and counting is pretty much irrelevant to everyday use, and to maths, where you can just hand-wave syntax. So people are very used to doing it "wrong".
That's most of the issue with this debate. Indexing from 1 is wrong, but people are soooo used to it that they just can't get over what they think is the "normal" way.