
This doesn't really pass the smell test for me either, but to play devil's advocate:

Imagine you have 2 irrational numbers, and for some a priori reason you know they cannot be equal. You write a computer program to calculate them to arbitrary precision, but no matter how many digits you generate they are identical to that approximation. You know that there must be some point at which they diverge, with one being larger than the other, but you cannot determine when or by how much.




Maybe you will find the proof that the infinite series 0.9999... exactly equals 1 interesting:

https://en.wikipedia.org/wiki/0.999...


Wow, can't believe I've never realised this. How counterintuitive.

The 1/3 * 3 argument is the one I found the most intuitive.


It's a flawed psychological argument though, because it hinges on accepting that 0.333...=1/3, for which the proof is the same as for 0.999...=1. People have less of a problem with 1/3 so they gloss over this - for some reason, nobody ever says "but there is always a ...3 missing to 1/3" or something.


The problem is that there are two different ways to write the same number in infinite decimal notation (0.999... and 1.000...).

That's what's counterintuitive to people; it's not an issue with 1/3, which has just one way to be written as a decimal: 0.333...


Another intuition:

Every decimal with a single repeating digit is a fraction with a denominator of 9.

E.g. 0.1111.... is 1/9

0.7777.... is 7/9

It therefore stands to reason that 0.99999.... is 9/9, which is 1
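This pattern is easy to check mechanically. A quick sketch with Python's `fractions` module (my own illustration, not from the thread): each d/9 differs from its n-digit truncation 0.dd...d by exactly d/(9·10ⁿ), a residual that shrinks to zero.

```python
from fractions import Fraction

# Each single repeating digit d corresponds to the fraction d/9.
# Compare d/9 against its n-digit truncation 0.ddd...d:
for d in range(1, 10):
    for n in (5, 20):
        truncation = Fraction(int(str(d) * n), 10**n)   # e.g. 0.77777
        residual = Fraction(d, 9) - truncation
        assert residual == Fraction(d, 9 * 10**n)        # shrinks to 0

# And 9/9 is exactly 1, matching 0.999... = 1.
assert Fraction(9, 9) == 1
```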


That's a good one! Might replace my current favorite which is:

Let x = 0.99...

Then 10*x = 9.99...

And if we subtract x from both sides, we get:

10x - x = 9.99... - x

And since we already defined x=0.99... when we subtract it from 9.99..., we get

9x = 9

So we can finally divide both sides by 9:

x = 1
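The same manipulation can be sanity-checked on finite truncations; a small sketch in Python (my own addition, using exact fractions):

```python
from fractions import Fraction

# x_n = 0.99...9 with n nines, i.e. 1 - 10**-n.
# The 10x trick is exact at every truncation: 10*x_n - x_n = 9*x_n,
# and the gap between x_n and 1 is 10**-n, which vanishes as n grows.
for n in (1, 5, 50):
    x = 1 - Fraction(1, 10**n)
    assert 10 * x - x == 9 * x
    assert 1 - x == Fraction(1, 10**n)   # the only candidate "difference"
```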


I like the argument that observes "if you subtract 0.99(9) from 1, you get a number in which every decimal place is zero".

The geometric series proof is less fun but more straightforward.

As a fun side note, the geometric series proof will also tell you that the sum of every nonnegative power of 2 works out to -1, and this is in fact how we represent -1 in computers.
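In k-bit two's complement the all-ones pattern really does behave as -1; a quick Python check (my own sketch):

```python
# The sum 1 + 2 + 4 + ... + 2**(k-1) is the all-ones k-bit pattern,
# which equals 2**k - 1 -- the two's-complement representation of -1.
for k in (8, 16, 64):
    all_ones = sum(2**i for i in range(k))
    assert all_ones == 2**k - 1
    assert all_ones == (-1) % 2**k       # same residue as -1 mod 2**k
    assert (all_ones + 1) % 2**k == 0    # adding 1 wraps to 0
```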


How can the sum of a bunch of positive powers of 2 be a negative number?

Isn't the sum of any infinite series of positive numbers infinity?


https://youtu.be/krtf-v19TJg?si=Tpa3EW88Z__wfOQy&t=75

You can 'represent' the process of summing an infinite number of positive powers of x as a formula. That formula corresponds 1:1 to the process only for -1 < x < 1. However, when you plug 2 into that formula you essentially jump past the discontinuity at x = 1 and land on a finite value of -1. This 'makes sense' and is useful in certain applications.
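Concretely, the closed form of the geometric series is 1/(1 − x); plugging in values on both sides of the discontinuity (a sketch of my own, not from the video):

```python
# Closed form of the geometric series sum(x**n for n >= 0).
# It matches the convergent sum only for -1 < x < 1, but the
# formula itself is defined for any x != 1.
def geometric_closed_form(x):
    return 1 / (1 - x)

assert geometric_closed_form(0.5) == 2.0    # agrees with the actual sum
assert geometric_closed_form(2) == -1.0     # the "jumped past" value
```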


The infinite sum of powers of 2 indeed diverges in the real numbers. However, in the 2-adic numbers, it does actually equal -1.

https://en.wikipedia.org/wiki/P-adic_number
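The 2-adic claim can be checked with modular arithmetic: a sequence converges 2-adically to L when it agrees with L modulo ever-larger powers of 2 (my own sketch):

```python
# Partial sums 1 + 2 + ... + 2**(n-1) equal 2**n - 1, which is
# congruent to -1 modulo 2**n. So the partial sums get arbitrarily
# "close" to -1 in the 2-adic sense, and the series converges to it.
for n in (4, 10, 64):
    partial = sum(2**i for i in range(n))
    assert (partial - (-1)) % 2**n == 0
```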


Eh, p-adic numbers basically write the digits backwards, so "-1" has very little relation to a normal -1.


Any ring automatically gains all integers as meaningful symbols because there is exactly one ring homomorphism from Z to the ring.


-1 means that when you add 1, you get 0. And the 2-adic number …11111 has this property.


\1 is a good question that deserves an answer.

\2 is "not always" ..

Consider the sum 1 + 1/2 + 1/4 + 1/8 + 1/16 + 1/32 ...

an infinite sequence of continually decreasing terms: the more you add, the smaller the quantity added becomes.

It appears to approach, but never reach, some finite limit.

Unless, of course, by "number" you mean a whole integer, counting number, etc.

It's important to nail down those definitions.
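The series above can be checked numerically with exact fractions (a sketch, assuming Python):

```python
from fractions import Fraction

# Partial sums of 1 + 1/2 + 1/4 + ... : each term halves the gap to 2,
# so the sum approaches the finite limit 2 without ever exceeding it.
total = Fraction(0)
for n in range(30):
    total += Fraction(1, 2**n)
    assert total < 2
assert 2 - total == Fraction(1, 2**29)   # gap after 30 terms
```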


> \1 is a good question that deserves an answer.

The same argument I mentioned above, that subtracting 0.99999... from 1 will give you a number that is equal to zero, will also tell you that binary ...11111 or decimal ...999999 is equal to negative one. If you add one to the value, you will get a number that is equal to zero.

You might object that there is an infinite carry bit, but in that case you should also object that there is an infinitesimal residual when you subtract 0.9999... from 1.

It works for everything, not just -1. The infinite bit pattern ...(01)010101 is, according to the geometric series formula, equal to -1/3 [1 + 4 + 16 + 64 + ... = 1 / (1-4)]. What happens if you multiply it by 3?

        ...0101010101
      x            11
    -------------------
        ...0101010101
     + ...01010101010
    -------------------
       ...11111111111
You get -1.
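The same multiplication can be checked modulo any power of 2 (my own sketch):

```python
# In k bits, the pattern ...010101 is sum(4**i), truncated mod 2**k.
# The geometric formula says it should act like 1/(1 - 4) = -1/3,
# so multiplying by 3 should give -1, the all-ones pattern.
for k in (8, 32, 64):
    pattern = sum(4**i for i in range(k // 2)) % 2**k
    assert (3 * pattern) % 2**k == 2**k - 1        # -1 mod 2**k
```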


But if you look at limits you get "0" and "diverges".

And decimal "...999999" is an infinity, which should immediately set off red flags and tell you that you need to be extra careful when analyzing it.

In computers your series of 1s is not infinite, there's a modulus that steps in. And this analysis depends on the modulus being an exact power of the base. But you could make a system that's decimal but has a modulus of 999853, for example, and then "-1" would be 999852.
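That distinction is easy to demonstrate (a sketch, using the 999853 modulus from the comment):

```python
# With modulus 10**6 (a power of the base 10), "all nines" is -1:
assert (999999 + 1) % 10**6 == 0

# With modulus 999853 (not a power of 10), -1 is 999852 instead,
# and the all-nines pattern no longer plays that role:
m = 999853
assert (-1) % m == 999852
assert (999999 + 1) % m != 0
```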


> In computers your series of 1s is not infinite, there's a modulus that steps in. And this analysis depends on the modulus being an exact power of the base.

That isn't quite correct. The series of 1s really is conceptually infinite. That's why we have sign extension. The analysis (of the sum of all natural powers of 2) will work for any modulus that is an integral power of 2, including a modulus where the integer to which 2 is raised is infinitely large. Such an infinite modulus will still be evenly divided by a finite power of 2 -- as well as by itself -- and so it will disappear whenever you're working in any finite modulus that is a power of 2 -- or when you are working modulo 2^ℕ. The modulus of 2^ℕ will prevent any distinct finite integers from falling into the same equivalence class.

This is what enables you to have an infinite series of leading 1s, or leading patterns, without problems.
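Sign extension makes the "conceptually infinite leading 1s" concrete: widening a negative value copies the top bit into the new positions. A minimal sketch (the helper below is my own, not a standard library function):

```python
def sign_extend(value, from_bits, to_bits):
    """Widen an unsigned from_bits-wide two's-complement pattern."""
    if value & (1 << (from_bits - 1)):                 # top bit set: negative
        value |= ((1 << to_bits) - 1) ^ ((1 << from_bits) - 1)
    return value

assert sign_extend(0xFF, 8, 16) == 0xFFFF   # -1 keeps its run of leading 1s
assert sign_extend(0x7F, 8, 16) == 0x007F   # +127 gets leading 0s instead
```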


In an introductory course to String Theory they tried to tell me that 1+2+3+4+... = -1/12.

There is some weird appeal to the Zeta function which implies this result and apparently even has some use in String Theory, but I cannot say I was ever convinced. I then dropped the class. (Not the only thing that I couldn't wrap my head around, though.)


The result isn't owed to the zeta function. For example, Ramanujan derived it by relating the series to the product of two infinite polynomials, (1 - x + x² - x³ + ...) × (1 - x + x² - x³ + ...). (Ok, it's the square of one infinite polynomial.)

Do that multiplication and you'll find the result is (1 - 2x + 3x² - 4x³ + ...). So the sum of the sequence of coefficients {1, -2, 3, -4, ...} is taken to be the square of the sum of the sequence {1, -1, 1, -1, ...} (because the polynomial associated with the first sequence is the square of the polynomial associated with the second sequence), and the sum of the all-positive sequence {1, 2, 3, 4, ...} is calculated by a simpler algebraic relationship to the half-negative sequence {1, -2, 3, -4, ...}.
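The coefficient pattern is easy to verify by squaring the series formally (my own check, truncated to ten terms):

```python
# Coefficients of 1 - x + x^2 - x^3 + ... are a_n = (-1)**n.
# The square's coefficient of x**n is the convolution sum(a_i * a_(n-i)),
# which works out to (n+1) * (-1)**n: the sequence 1, -2, 3, -4, ...
a = [(-1)**n for n in range(10)]
square = [sum(a[i] * a[n - i] for i in range(n + 1)) for n in range(10)]
assert square == [1, -2, 3, -4, 5, -6, 7, -8, 9, -10]
```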

The zeta function is just a piece of evidence that the derivation of the value is correct in a sense - at the point where the zeta function would be defined by the infinite sum 1 + 2 + 3 + ..., to the extent that it is possible to assign a value to the zeta function at that point, the value must be -1/12.

https://www.youtube.com/watch?v=jcKRGpMiVTw is a youtube video (Mathologer) which goes over this material fairly carefully.



