This doesn't really pass the smell test for me either, but to play devil's advocate:
Imagine you have two irrational numbers that, for some a priori reason, you know cannot be equal. You write a computer program to calculate them to arbitrary precision, but no matter how many digits you generate, the approximations are identical. You know there must be some point at which they diverge, with one being larger than the other, but you cannot determine where or by how much.
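For a concrete toy version of this (my illustration, not the parent's exact setup): Ramanujan's constant e^(π√163) is transcendental, so it provably isn't an integer, yet it agrees with 262537412640768744 through the first dozen decimal places. A minimal sketch using Python's mpmath library:

    # Two numbers known a priori to be unequal (one transcendental, one an
    # integer) that look identical until the precision is cranked high enough.
    from mpmath import mp, exp, pi, sqrt

    for digits in (15, 25, 35):
        mp.dps = digits                      # working precision, in decimal digits
        print(digits, exp(pi * sqrt(163)))
    # At 15 and 25 digits the output is indistinguishable from the integer
    # 262537412640768744; only past ~30 digits does the
    # ...743.99999999999925... tail finally show.

Unlike the parent's scenario we do know where these two diverge, but a program capped at low precision could never tell.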
It's a flawed psychological argument, though, because it hinges on accepting that 0.333... = 1/3, for which the proof is the same as for 0.999... = 1. People have less of a problem with 1/3, so they gloss over this - for some reason, nobody ever says "but there is always a ...3 missing from 1/3" or anything like that.
I like the argument that observes "if you subtract 0.99(9) from 1, you get a number in which every decimal place is zero".
The geometric series proof is less fun but more straightforward.
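For reference, that proof is just the |r| < 1 geometric series formula applied to the decimal expansion:

    0.999... = 9/10 + 9/100 + 9/1000 + ...
             = (9/10) · 1/(1 - 1/10)        [sum of a·rᵏ = a/(1 - r), |r| < 1]
             = (9/10) · (10/9)
             = 1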
As a fun side note, the geometric series proof will also tell you that the sum of every nonnegative power of 2 works out to -1, and this is in fact how we represent -1 in computers.
You can 'represent' the process of summing all the nonnegative powers of x as a formula. That formula corresponds 1:1 to the process only for -1 < x < 1. However, when you plug 2 into the formula you essentially jump past the discontinuity at x = 1 and land on a finite value of -1. This 'makes sense' and is useful in certain applications.
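To make the computer connection concrete, a sketch of mine (the 8-bit width is an arbitrary choice; Python's int.from_bytes does the reinterpretation):

    # The formula 1 + x + x² + ... = 1/(1 - x) evaluated at x = 2 gives -1.
    # In an n-bit register the "infinite" sum truncates to n terms: the
    # all-ones pattern, which two's complement hardware reads as -1.
    bits = 8
    all_ones = sum(2**k for k in range(bits))            # 0b11111111 = 255
    assert all_ones == 2**bits - 1
    # Reinterpret the same bit pattern as a signed 8-bit value:
    assert int.from_bytes(all_ones.to_bytes(1, "big"), "big", signed=True) == -1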
The same argument I mentioned above (that subtracting 0.99999... from 1 gives a number equal to zero) also tells you that binary ...11111 or decimal ...999999 is equal to negative one: add one to the value and you get a number in which every digit is zero.
You might object that there is an infinite carry bit, but in that case you should also object that there is an infinitesimal residual when you subtract 0.9999... from 1.
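The finite-width version of that carry argument, as a sketch (8 bits again arbitrary):

    # Adding 1 to the all-ones pattern: the carry ripples past every bit and
    # falls off the top, leaving all zeros behind. In the infinite pattern
    # ...11111 there is no top for the carry to land in, so the result is 0.
    bits = 8
    assert (0b11111111 + 1) % 2**bits == 0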
It works for everything, not just -1. The infinite bit pattern ...(01)010101 is, according to the geometric series formula, equal to -1/3 [1 + 4 + 16 + 64 + ... = 1 / (1-4)]. What happens if you multiply it by 3?
       ...0101010101
    x             11
    ----------------
       ...0101010101
    + ...01010101010
    ----------------
      ...11111111111
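The finite-width version checks out in any power-of-2 modulus; a sketch (8 bits arbitrary, and pow(3, -1, m) needs Python ≥ 3.8):

    # In 8 bits the repeating pattern truncates to 0b01010101 = 0x55 = 85.
    bits = 8
    pattern = sum(4**k for k in range(bits // 2))   # 1 + 4 + 16 + 64
    assert pattern == 0b01010101
    # Multiplying by 3 gives the all-ones pattern, i.e. -1 in two's complement:
    assert (pattern * 3) % 2**bits == 2**bits - 1
    # And -1/3 computed directly in modular arithmetic is the same pattern:
    assert (-pow(3, -1, 2**bits)) % 2**bits == pattern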
But if you look at limits, the two cases come apart: the residual 1 - 0.99...9 tends to "0", while 1 + 2 + 4 + ... just "diverges".
And decimal "...999999" is an infinity, which should immediately set off red flags and tell you that you need to be extra careful when analyzing it.
In computers your series of 1s is not infinite; there's a modulus that steps in. And this analysis depends on the modulus being an exact power of the base. But you could build a system that's decimal but has a modulus of 999853, for example, and then "-1" would be 999852.
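(Python's % operator already reduces into that canonical range for any modulus, so this is a one-line check:

    assert (-1) % 999853 == 999852   # "-1" in a base-10, modulus-999853 system

and note that 999852 is not all nines, so the "repeating top digit" trick really is specific to power-of-the-base moduli.)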
> In computers your series of 1s is not infinite; there's a modulus that steps in. And this analysis depends on the modulus being an exact power of the base.
That isn't quite correct. The series of 1s really is conceptually infinite; that's why we have sign extension. The analysis (of the sum of all nonnegative powers of 2) works for any modulus that is an integral power of 2, including the limiting case where the exponent is infinitely large. Such an infinite modulus is still evenly divisible by any finite power of 2 -- as well as by itself -- and so it disappears whenever you're working in any finite power-of-2 modulus, or when you're working modulo 2^ℕ. And the modulus 2^ℕ prevents any two distinct finite integers from falling into the same equivalence class.
This is what enables you to have an infinite series of leading 1s, or leading patterns, without problems.
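Sign extension is exactly the act of materializing finitely many of those leading copies. A sketch (the helper name sign_extend is mine):

    # Reinterpret an n-bit value as signed by conceptually extending its top
    # bit leftward forever (two's complement sign extension).
    def sign_extend(value: int, bits: int) -> int:
        sign_bit = 1 << (bits - 1)
        return (value & (sign_bit - 1)) - (value & sign_bit)

    assert sign_extend(0xFF, 8) == -1                        # ...11111111 -> -1
    assert sign_extend(0x55, 8) == 0x55                      # top bit 0: unchanged
    assert sign_extend(0xAB, 8) == sign_extend(0xFFAB, 16)   # widening is harmless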
In an introductory course on String Theory they tried to tell me that 1 + 2 + 3 + 4 + ... = -1/12.
There is some weird appeal to the Zeta function which implies this result and apparently even has some use in String Theory, but I cannot say I was ever convinced. I then dropped the class. (Not the only thing that I couldn't wrap my head around, though.)
The result doesn't originate with the zeta function. For example, Ramanujan derived it by relating the series to the product of two infinite polynomials, (1 - x + x² - x³ + ...) × (1 - x + x² - x³ + ...). (OK, it's the square of one infinite polynomial.)
Do that multiplication and you'll find the result is 1 - 2x + 3x² - 4x³ + ... . So the sum of the sequence of coefficients {1, -2, 3, -4, ...} is taken to be the square of the sum of {1, -1, 1, -1, ...} (because the polynomial built from the first sequence is the square of the polynomial built from the second), and the sum of the all-positive sequence {1, 2, 3, 4, ...} then follows from a simpler algebraic relationship to the alternating sequence {1, -2, 3, -4, ...}, spelled out below.
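That relationship runs like this (every line assigns a value to a divergent series, which is of course exactly the contested move):

    A  = 1 - 1 + 1 - 1 + ... = 1/2          [1/(1+x) at x = 1]
    A² = 1 - 2 + 3 - 4 + ... = 1/4          [1/(1+x)² at x = 1]
    S  = 1 + 2 + 3 + 4 + ...
    S - A² = 0 + 4 + 0 + 8 + ... = 4S
    so -3S = 1/4, i.e. S = -1/12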
The zeta function is just a piece of evidence that the derivation is correct in a sense: at the point where the zeta function would be defined by the infinite sum 1 + 2 + 3 + ... (namely s = -1), to the extent that it is possible to assign the function a value there at all, that value must be -1/12.
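mpmath will happily show you that analytically continued value (the series itself only converges for Re(s) > 1):

    from mpmath import zeta

    # zeta(s) = 1 + 2^-s + 3^-s + ... converges only for Re(s) > 1; at s = -1
    # that series would read 1 + 2 + 3 + ..., and mpmath evaluates the
    # analytic continuation there instead.
    print(zeta(-1))   # -0.0833333333333333 = -1/12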