Mathematics: Came across the following gag. Is it true? (math.stackexchange.com)
68 points by bjourne on Oct 19, 2012 | 80 comments



I'm guessing people here are familiar with Borges' "Library of Babel". (If not, it's a short story about a library that contains every possible book).

I read an interesting thought experiment about the concept:

1. There's lots of different languages around, but all languages (even Chinese) use a finite alphabet of symbols. So for simplicity we can just convert all languages in the Library into binary.

2. On first appearance, the library would contain books of an infinite number of lengths. But it would surely be more practical to split such books into volumes.

3. You'd have a maximum volume length of say, 1 million bits. Books shorter than that can just be padded with 0s. But you can now double up volumes: volume #6345 of book #1840849 might be an exact duplicate of volume #93974 of book #11132445. Which means the library can now be finite in size: it only needs to contain all one-million-bit strings, a mere 2^1000000 volumes.

4. Why stop there, though? It's clear we could just use smaller volumes and get a smaller library. In fact, the library only needs two pages - one '0' and one '1'. Depending on the order you read them, you can get any book you want out of it.
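Point 3's "compression" can be sketched in a few lines of Python (my own toy illustration, not from the original post). In the library of all fixed-length volumes, a volume's natural catalogue number is just its own bit pattern, so the list of catalogue numbers for a book carries exactly as many bits as the book itself:

```python
def shelve(book, volume_len):
    """Split a bit-string 'book' into fixed-length volumes, padding the
    last one with '0's, and return the volumes together with their
    catalogue numbers in the library of all volume_len-bit volumes."""
    padded = book + "0" * (-len(book) % volume_len)
    volumes = [padded[i:i + volume_len] for i in range(0, len(padded), volume_len)]
    # In a library holding every possible volume, the natural catalogue
    # number of a volume is simply its value as a binary integer.
    numbers = [int(v, 2) for v in volumes]
    return volumes, numbers

volumes, numbers = shelve("110101", 4)
# volumes == ["1101", "0100"], numbers == [13, 4]
```

The catalogue entry [13, 4] is just the book chopped up again, which is the reductio that the replies below spell out.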

Information theory is weird.

Edit: here's the original: http://jubal.westnet.com/hyperdiscordia/universal_library.ht...


Not sure if we're on the same page here, but this does not conform to what we know about information theory.

First, as someone else mentioned we'd have to have a maximum book length or else your library contains an infinite number of books, which is simply absurd because there would, for instance, be a book that contained all of the finite books inside it, in alphabetical order.

Second, using your compression technique, we'd still need to have some numbering on the volumes, and to describe a book, we'd have to do so by a list of numbers describing which volumes comprise it and in what order.

Third, this makes clear what is happening in your reductio ad absurdum argument -- we can reduce the volumes down to one dash, one dot; but then each book needs to contain an ordered listing of which "volume" goes where. So we're just writing the books in binary again.


> Second, using your compression technique, we'd still need to have some numbering on the volumes, and to describe a book, we'd have to do so by a list of numbers describing which volumes comprise it and in what order.

Yes, for a normal library this would be the case. But this library contains every possible book.

So the index list you're asking for, describing which volumes comprise what book in what order, is the complete list of all possible orderings of volumes up to the maximum book length.

You don't really need that list, because you know exactly what it looks like.

But then, you don't really need the volumes either, because you know what they look like too (being all combinations of N bits).

So yeah, you'd just need a 0 and a 1.

I already got a copy of that library, and the cool thing is that it doubles as a light switch.


There seems to be some assumption here that books are named by their ordering of volumes. But I might know the name of a book and not its contents, so I'd need to look up the contents. (Of course, this is silly if you have every possible book, because any other naming scheme means most names will be longer than the books themselves....)


or else your library contains an infinite number of books, which is simply absurd because there would, for instance, be a book that contained all of the finite books inside it, in alphabetical order.

Careful, that doesn't follow. The library could contain all finite books, but no infinite book.


Absolutely correct. That was really sloppy of me.


You are not accounting for meta information, such as which volumes go together in which order to form which book.


The point of a library containing every possible book is you don't need any meta information – if it has every possible book, it doesn't matter what order you put the volumes, or pages, or words together.

And that's why the library containing every possible book contains no information (think, for every correct maths textbook, there is one with every fact negated, one with every number replaced with the Unicode symbol for a cake, and very many that are wrong in all manner of nastily subtle ways.)


>>> the library containing every possible book contains no information

Slightly tangential, but this reminds me of an old saying I used to apply to both my paintings and my music. (and my critique of others) "Emphasis on everything is emphasis on nothing"

I think it goes along with that idea that if everything is there, there is nothing to discern it from anything else. And thus it becomes bland, and contains no real information (or emotion, as would apply to music/art). Leading me to another thought: if emotions can be encoded as information through music, is there a way to digitize this? (The easy answer being yes, hello 1s & 0s) But I think there is more thought in there to be had...

</stream of consciousness> sorry I got lost in there for a little...


Getting way OT here but....

That reminds me of how my mom watches American Idol. She watches all of the contestants, and can't decide on who to vote for, so she votes for all of them....


You are being intentionally vague for no good reason. Define library in a reasonable way and your claim is not true. Define library as an "uncatalogued lexicographically sorted collection" and your claim holds. A library is about selection and organization and metadata (a form of compression), not merely enumeration.


Borges's library doesn't actually contain every possible book, just every possible book of a certain length (it's finite but very large).

Fun fact about Quine -- he didn't actually write the first known self-printing program! http://en.wikipedia.org/wiki/Quine_(computing)


You can find every possible book longer than the default length split into several volumes.


Yes, but that's saying the same thing as "you can write all binary numbers with the digits 0 and 1".

These correspond to choosing a 'book length' of 1 bit.

Each million-bit book is just a single digit in base 2^1,000,000.

Numbers larger than that require more than one book, just as numbers larger than 1 require more than one binary digit.
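The "book as a number" correspondence is concrete in any language with big integers; here is a minimal sketch in Python (the sample text is my own choice):

```python
# A book of N bytes is literally one number written in base 256
# (equivalently, a single "digit" in base 256**N).
book = b"Call me Ishmael."
number = int.from_bytes(book, "big")

# Going back: the number recovers the text, as long as we also remember
# the length (leading zero bytes would otherwise be lost).
restored = number.to_bytes(len(book), "big")
assert restored == book
```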


You don't even have to assume the alphabet is finite, merely countable, and you still have exactly one book for each finite string of bits. In other words, the set of every finite sequence of integers (or natural numbers, rationals, finite strings of Chinese characters, etc.) is no larger than the proper subset of "every sequence of length one".
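That bijection between the naturals and all finite bit strings can be written down explicitly; a standard shortlex trick (my own sketch, not from the comment):

```python
def nth_string(n):
    """The n-th finite bit string in shortlex order:
    0 -> "", 1 -> "0", 2 -> "1", 3 -> "00", 4 -> "01", ..."""
    return bin(n + 1)[3:]  # binary of n+1 with the leading '1' stripped

def string_index(s):
    """Inverse of nth_string: recover n from the string."""
    return int("1" + s, 2) - 1

assert [nth_string(n) for n in range(5)] == ["", "0", "1", "00", "01"]
assert all(string_index(nth_string(n)) == n for n in range(100))
```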

A library with uncountably many distinct volumes is a fun thought experiment, though, as is a text written in a language that uses an uncountable alphabet, wherein a single letter potentially contains an infinite amount of information...


> a text written in a language that uses an uncountable alphabet, wherein a single letter potentially contains an infinite amount of information...

Isn't that called "drawing" instead of "writing", though?

So, comic books? :)


Information theory already has the answer: the original library, of all 1 million bit strings, contains zero (or very close to zero) information. Paradoxically enough, it contains far less information than one of those 1 million bit strings selected at random.

A library is not just a collection of all possible patterns, is perhaps the real lesson here.


>4. Why stop there, though? It's clear we could just use smaller volumes and get a smaller library. In fact, the library only needs two pages - one '0' and one '1'. Depending on the order you read them, you can get any book you want out of it.

We have to stop there. I cannot simply replace my iTunes library with two text files containing a '0' and a '1'.


If your itunes library contained every possible audio file, then yes, you could.


http://yro.slashdot.org/story/01/03/17/1639250/illegal-prime...

This is from WAAAY back in the day.

"A person named Phil Carmody has found a very interesting prime number. When converted to hexadecimal, the result is a gzip that contains a DeCSS implementation. I've posted a short bit of Java here that takes the prime as a command line parameter and dumps the result to standard out if you want to test it."

...I actually ran it once upon a time, although I didn't check if the links were current.

While the context is slightly different (primes vs. pi), the concept is the same. If you can dig deep enough into the prime numbers to find the "illegal" topic du jour expressed as a zip file... well, I won't put it past pi to contain almost everything else as well.


Ever since I was a kid in elementary school I've loved the idea/thought experiment of owning a hard disk full of every possible n-by-n binary image. By definition, on this hard disk you would have a sketch of every person you've ever met, every person you've ever loved, every place you've ever been, every alien in the solar system. On one hand, it's endearing that you could depict so many things in such a finite medium, but at the same time it's astounding how much storage space you would need for even the smallest reasonable values of N. Sometimes when I'm bored waiting for a bus, I love to think about what it would be like to pore through all these images and somehow know which ones would be meaningful in my future that I don't yet know.
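The astounding growth is easy to make concrete (a quick back-of-the-envelope in Python; the ~10^80 atoms figure is the standard estimate for the observable universe):

```python
# Number of distinct n-by-n black-and-white images: one bit per pixel.
for n in (2, 4, 8, 16):
    count = 2 ** (n * n)
    print(f"{n}x{n}: 2^{n * n} = about 10^{len(str(count)) - 1}")

# Already at 16x16 there are 2^256 images, roughly 10^77 -- within a few
# orders of magnitude of the number of atoms in the observable universe.
```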


Sadly, there isn't enough energy in the entire Universe to cycle even a 16x16 black and white grid through all of its possible states.


Unless you happen to own a quantum computer.


This reminds me of the "pi encoding" - if pi is, in fact, normal, you can encode any file as a tuple (`start`, `length`) encoding a substring of the expansion of pi.

It's often phrased as a compression paradox of a sort - surprisingly many people overlook the fact that `start` will tend to take a lot of bits to encode.
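A toy version of that encoder, using Gibbons's unbounded spigot to stream decimal digits of pi (a sketch only; real digit hunts use far faster algorithms, and as noted the `start` offsets themselves get huge):

```python
from itertools import islice

def pi_digits():
    """Gibbons's unbounded spigot: streams the decimal digits of pi
    (3, 1, 4, 1, 5, 9, ...) using exact integer arithmetic."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

def pi_encode(payload, limit=1000):
    """Encode a digit string as a (start, length) pair pointing into the
    first `limit` digits of pi, or None if it doesn't appear that early."""
    digits = "".join(str(d) for d in islice(pi_digits(), limit))
    start = digits.find(payload)
    return None if start == -1 else (start, len(payload))

# The famous run of six 9s (the "Feynman point") begins at decimal place 762:
print(pi_encode("999999"))  # -> (762, 6)
```

Most short payloads won't show up in the first thousand digits at all, which is the other half of the paradox: the expected `start` for an n-digit payload is around 10^n.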


It's also quite slow...


Does this mean writing out a normal number is illegal? They encode state secrets, child pornography, pizza recipes, and entire computer simulations of universes where owning pizza recipes is considered high treason.


Only to the same extent that counting is illegal, because if you count high enough you'll eventually generate a numeric representation of every possible datum.


The law is about intent. Writing out a number isn't illegal, but writing out a number with the purpose of carrying out something that is illegal is.

To be fair, it is a concept that I still struggle with. My programmer mind is always stuck in "it is either true or it isn't" mode. Things that are only true sometimes do not make sense to me.


Yes. Recall the DeCSS controversy in the USA, and the Krikkit wars in fiction's HHGTTG


Sounds similar to the http://en.wikipedia.org/wiki/Infinite_monkey_theorem ...

"a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type a given text, such as the complete works of William Shakespeare."


In the ideal world of mathematics there is no distinction between creating and discovering. Everything that is possible exists. That makes the infinite monkey theorem or statements like "the answers to all the great questions of the universe can be found in the digits of π" meaningless. For instance, every future masterpiece of literature already exists in the mathematical sense, yet it will still take a genius to discover it in the infinite sequence of all possible texts.


Do you have any bibliography for this topic? I would be very grateful, I'm extremely interested in this.


Infinite and non-repeating does not imply it contains every number.

For proof of this, consider an infinite non-repeating number without the digit 9.

Pi is infinite and non-repeating. I have not yet seen proof that pi contains every finite number.


Assuming that π is normal and infinite (that's redundant but for clarity), then yes, it's true -- it contains every finite-sized number, every textual sequence, every secret, every book ever written. Scientific secrets we may never discover. Cures to nonexistent diseases. Everything!

But this is as meaningless as it is true, because the problem is not that those secrets are present, the problem lies in locating them.

I had this conversation years ago on Usenet -- the same question, the same answer. My correspondent seemed to think it was remarkable that all those riches lay within π. He didn't seem to realize that the same could be said of any normal, infinite sequence of digits, of which there may be an infinite number (not proven, but it's possible that e, √2, indeed the square root of any non-perfect-square number, all fall into this category).

If the grains of sand on a beach could be decoded as binary bits (or as digits in any base) by some accepted convention, and making the same assumptions about normality and infinity, then a beach also contains every secret, every book that has ever been written or will ever be written. But it's the same problem - those secrets cannot be located, distinguished from sequences we might label meaningless, simply because they're answers we're not interested in.

http://en.wikipedia.org/wiki/Normal_number

A quote: "While a general proof can be given that almost all numbers are normal (in the sense that the set of exceptions has Lebesgue measure zero), this proof is not constructive and only very few specific numbers have been shown to be normal. For example, it is widely believed that the numbers √2, π, and e are normal, but a proof remains elusive."


I don't have a strong mathematics background, but would I be right in thinking that the probability of a normal, infinite number containing a given arbitrary string of digits is "almost sure" rather than "sure"? Or can it actually be proven mathematically that it contains such a string?


> ... but would I be right in thinking that the probability of a normal, infinite number containing a given arbitrary string of digits is "almost sure" rather than "sure"?

No. Given the assumptions (a normal number, therefore infinite in extent), the probability that a given finite sequence appears somewhere in it is 1.0.

> Or can it actually be proven mathematically that it contains such a string?

Here's one way to think about it. Let's say that you flip a perfectly fair, random coin, heads counts as 1 and tails 0, and each flip adds a bit to a sequence. Let's further say that some finite bit sequence is chosen as a finite-subsequence target, let's say the text of "War and Peace" by Leo Tolstoy (a famously big book, 1,225 pages).

We can agree that, as the sequence increases in length, the probability of encountering "War and Peace" within it strictly increases with the number of bits flipped so far.

Now the clincher -- what is true for a finite sequence (that the probability of encountering a given subsequence increases but never becomes certain) is not true for an infinite sequence. For an infinite sequence, the probability for the appearance of a given subsequence becomes certain, i.e. 1.0. The only requirement is that the subsequence be finite.
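The finite part of this claim can be checked exactly for small targets (standing in for "War and Peace"). Here is a sketch of my own that computes the exact probability that a given bit pattern appears somewhere in n fair coin flips, via a dynamic program over KMP prefix states; the probability climbs toward 1 but only reaches it in the limit:

```python
from fractions import Fraction

def prob_contains(pattern, n):
    """Exact probability that `pattern` occurs as a substring of n fair
    random bits, tracked with a DP over matched-prefix lengths."""
    m = len(pattern)

    def step(s, c):
        # KMP-style transition: longest prefix of `pattern` that is a
        # suffix of the text after reading bit c in state s.
        while s > 0 and pattern[s] != c:
            k = s - 1
            while k > 0 and pattern[:k] != pattern[s - k:s]:
                k -= 1  # fall back to the longest proper border
            s = k
        return s + 1 if pattern[s] == c else 0

    dist = {0: Fraction(1)}  # probability distribution over prefix states
    found = Fraction(0)      # mass absorbed by "pattern already seen"
    for _ in range(n):
        new = {}
        for s, p in dist.items():
            for c in "01":
                t = step(s, c)
                if t == m:
                    found += p / 2
                else:
                    new[t] = new.get(t, Fraction(0)) + p / 2
        dist = new
    return found

print(prob_contains("1010", 4))           # 1/16: only one of the 16 strings
print(float(prob_contains("1010", 100)))  # close to 1, but still below it
```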


Thanks for the explanation. While I have no doubt you're right (because I am completely outside my field of expertise), I have difficulty intuitively accepting the answer. By way of comparison, the wikipedia entry on the infinite monkey theorem [1] refers to the probability of producing the Shakespeare corpus as "almost sure", a situation that seems broadly analogous to the one you describe.

I suspect the difference is that the monkeys are required to produce an output which is merely random rather than evenly distributed.

[1] http://en.wikipedia.org/wiki/Infinite_monkey_theorem


> While I have no doubt you're right (because I am completely outside my field of expertise), I have difficulty intuitively accepting the answer.

When intuition fails while contemplating something involving infinity, I find it comforting to recall something John von Neumann said: "Young man, in mathematics you don't understand things. You just get used to them".


> I suspect the difference is that the monkeys are required to produce an output which is merely random rather than evenly distributed.

Perhaps, or the fact that probability theorists regularly describe a probability of 1.0 as "almost certain", a wording that to me contradicts common sense:

http://en.wikipedia.org/wiki/Almost_surely


Of course it does. It's similarly unreasonable to call a Möbius Strip with a disc attached along the boundary circle a "plane"[1], the surface of Cartesian (x, y) coordinates a "line"[2], and numbers having counterintuitive special features, "normal".

[1] http://en.wikipedia.org/wiki/Real_projective_plane [2] http://en.wikipedia.org/wiki/Complex_line


> numbers having counterintuitive special features, "normal".

I don't think that 'normal' implies 'intuitive', even in normal usage.


Any technical field has to define more concepts than can be distinctly named by casual vocabulary using their plain English meaning, so some words will be jargon.

Don't pick on "almost sure". We also have group, ring, field, ideal, prime, countable, closed, ....


"almost sure", in probability, means an event with probability 1.0. I realize this is somewhat counter-intuitive from an English point of view, but it makes sense from a Math point of view.


I just looked into this, and to me it represents philosophy masquerading as logic. Reasonable people may differ.

http://en.wikipedia.org/wiki/Almost_surely

A quote: "In probability theory, one says that an event happens almost surely (sometimes abbreviated as a.s.) if it happens with probability one."

This is like saying a dollar is approximately 100 pennies, or every woman is approximately pregnant (or the reverse).

Another quote: "Equivalently, we can say an event E happens almost surely if the probability of E not occurring is zero."

With probability theory being guided by notions like this, no wonder insurance is so expensive. To me, it is to probability theory what post-modernism is to literature -- it reduces everything to a matter of opinion.

In analysis, we have the notion of a limit, the idea that we can say something about a quantity that is only approached, never reached. Example:

lim x -> 0+, 1/x = ∞

Technically this says "as x approaches zero from above, 1/x grows without bound." But I can't tell you how many times I have heard this stated as "at x equals zero, 1/x equals infinity." But that's not so -- division by zero is undefined, so the problem as stated can never be described as anything but a limit, an approach.

The reverse logic applies to a probability of 1.0 -- 1.0 is not approximately anything, it's 1.0. It's not "almost one", it's "one". And if you subtract one from it, the result is not almost zero, it's zero.

> but it makes sense from a Math point of view.

Only if probability theory transcends the axioms of logic, and with the assumption that logic is the foundation on which mathematics is built.


You're confused about the meaning of almost surely, and your comparison with dollars and pregnancy is misleading.

The almost does not apply to the probability being one, but to the meaning of P(S) = 1 for S a member of your sigma-algebra. So when one says that a real number is almost surely normal, it means P({a is real; a is normal}) = 1, but it does not mean that every real is normal. Indeed, there are infinitely many non-normal numbers (for example, every integer is obviously not normal), even though the probability that a number is not normal is 0.

In the case of finite probability spaces, this usually does not happen, because for two finite sets A and B, #A != #B usually implies P(A) != P(B) [1], which is why your comparison with pennies/pregnancy is misleading.

[1] it is certainly possible to build probability spaces on finite sets where P(A) == P(B) and #A != #B.


> You're confused about the meaning of almost surely ...

Yes, but I'm not the only one. I think most people who read the justification for describing p = 1.0 as "almost surely" will have the same reaction, regardless of their training. Not to argue that logic must accept responsibility for our confusion.

> your comparison with dollars and pregnancy is misleading.

As to dollars, yes (because it's finite and trivial), but as to the parturient state of a set of women, it depends on the size of the set. For an infinite set of women, to describe them all as almost surely pregnant (or not), is a reasonable corollary to describing p = 1.0 as "almost surely" in the general case.

> Indeed, there are infinitely many non normal numbers (for example, every integer is obviously not normal), even though the probability that a number is not normal is 0.

This compares two infinite sets and draws a conclusion based on the outcome (the set of integers is infinite in size, and the set of numbers (reals and integers) is also infinite in size). I'm not sure this is comparable to making a statement about the mapping of a single well-defined probability (for example the appearance of War and Peace) into a single infinite set, like the presumed infinite set of the digits of Pi.

Which leads to an obvious question -- does the "almost surely" qualifier apply to all p = 1.0 probabilities, or only a subset? In other words, does the qualifier depend on the construction of the probability, or does it always apply?


To understand these issues, you should read up on measure theory: http://en.wikipedia.org/wiki/Measure_(mathematics)

pi is a single fixed number, so it doesn't help to talk about probabilities. Either every finite length sequence appears in pi, or at least one doesn't.

The technical term "almost surely" applies to all p=1 probabilities, even those where it's really certain that the event happens. This does not match with common understanding of "almost surely" because we'd not usually make a statement like "the decimal expansion of pi almost surely contains the number 4", but mathematically it's a correct statement.

It's important to keep in mind that this is technical terminology, so intuition does not always apply. It also means that it does not make sense to talk about women being almost surely pregnant without first defining a probability distribution.

If we define each woman to be pregnant with a probability of 1%, independent of the pregnancy status of other women, and we have an infinite number of women, then almost surely at least one of them is pregnant. That is, the probability that at least one of them is pregnant is 1. But it's not certain that at least one of them is pregnant: it could be that all of them are not pregnant.

Similarly, if we choose a number uniformly in the interval [0,1], then the probability of choosing 1/3 is 0, but it's possible that we chose 1/3. We cannot dismiss that as an impossibility either, otherwise when we chose a number x, we could always claim "it's impossible that we chose exactly x!".


In probability theory, the statement "event S occurs almost surely" means by definition that the probability of S occurring is 1. This is indeed confusing, but it's completely justified, given the constraints.

When non-mathematicians talk about probability, the situation they usually have in mind is that there are n possible events, each equally likely, as in a fair die throw. This leads to assigning each of the events the same probability, in the case of a die 1/6. We also want to be able to assign a probability to some subset of events occurring; for instance, the event that the number on the die is even corresponds to the subset {2 was thrown, 4 was thrown, 6 was thrown}. By common sense, this should be 1/2, and generally the probability should be (number of events in the subset / number of all events). This is how probability is usually taught in high school.

The problem is that this approach, while useful, is totally inadequate for lots and lots of things that people care about. For instance, say we want to investigate the probability of an earthquake occurring. We want to be able to answer the question "what's the probability that we will not have an earthquake in the next t seconds?". Assuming earthquakes are totally unpredictable, we should expect that if we wait s seconds, the answer to this question is still the same; in probability terms, P(no earthquake in the next t+s seconds, given no earthquake in the next s seconds) = P(no earthquake in the next t seconds). It's really not obvious how to apply the finite-set-of-equally-likely-events method here, if it's possible at all.

That's why most mathematicians use a similar, but much more general approach. Just like before, we consider a set of possible elementary events, but this time we don't assume it's finite, and we don't assign each one equal probability. Instead, we assign probabilities to subsets of events, subject to a few simple rules, but otherwise in a completely arbitrary way. The rules are: the probability that any of the events occurs is 1, and if A_1, A_2, ... are pairwise disjoint sets of events, we assign their union the sum of their probabilities.

For instance, to model an archer shooting at a circular target, we let the events be the points on the target (there are infinitely many of them), and we let the probability of a subset of events be its area (assuming the whole target has area one). Now it makes sense to ask, say, what's the probability of the arrow landing within half a radius of the center of the target? (It's 1/4.) But, since every point has area 0, the probability of the archer hitting any particular point is 0, thus the probability of the archer _not_ hitting that point is 1, meaning that the archer will _almost surely_ not hit that point. Almost, because while for every point the probability of hitting it is 0, the archer will hit _some_ point. That's why mathematicians say almost surely, and not surely, when talking about events with probability 1.
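A quick Monte Carlo check of the archer example (my own sketch, assuming hits uniformly distributed over the unit disc):

```python
import random

def estimate_inner_hit_probability(samples=100_000, seed=0):
    """Estimate P(hit lands within half a radius of the center) for a hit
    uniformly distributed over the unit disc, via rejection sampling."""
    rng = random.Random(seed)  # fixed seed for a reproducible run
    hits = inner = 0
    while hits < samples:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y <= 1:          # keep only points on the disc
            hits += 1
            if x * x + y * y <= 0.25:   # within radius 1/2 of the center
                inner += 1
    return inner / hits

print(estimate_inner_hit_probability())  # ~0.25, matching the area 1/4
```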


It's not proven, it's the definition. If there is a subsequence that isn't contained in a non-repeating infinitely long sequence, then it isn't a non-repeating infinitely long sequence.


This is not true. For instance, the sequence 00 does not occur in the sequence

010110111011110111110111111...

which can hardly be described as a "repeating" (or periodic) sequence.

The point here is normality. Fix an alphabet A having b letters. We say that a sequence S of elements of A is normal if, for every word w, we have

lim n->inf C(S, w, n)/n = b^(-k)

where C(S, w, n) is the number of occurrences of w in the first n letters of S, and k is the length of w. More intuitively, S is normal if every word of a given length occurs in S with the same asymptotic frequency.

So, if a sequence does not contain some word as a substring, it cannot be normal by the above definition.
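This definition is easy to play with on a number that is actually proven normal in base 10: Champernowne's constant 0.123456789101112... A sketch of my own, with loose tolerances since the convergence is slow:

```python
def champernowne_digits(n_numbers):
    """Digits of Champernowne's constant 0.123456789101112..., built by
    concatenating the decimal representations of 1..n_numbers."""
    return "".join(str(i) for i in range(1, n_numbers + 1))

def frequency(s, w):
    """C(S, w, n)/n from the comment above: the overlapping-occurrence
    frequency of the word w in the first n = len(s) letters of s."""
    count = sum(1 for i in range(len(s) - len(w) + 1) if s[i:i + len(w)] == w)
    return count / len(s)

s = champernowne_digits(99_999)   # ~489,000 digits
print(frequency(s, "7"))    # close to 10**-1
print(frequency(s, "42"))   # close to 10**-2
```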


You're conflating two things. A normal number contains every finite string as a substring (indeed, each with the expected asymptotic frequency). That implies that its expansion is infinite and non-repeating.

However, you can have a string that is infinite and non-repeating but not normal.

Pi is infinite and non-repeating, but it is not known to be normal.


It is proven that there is an infinite number of normal numbers, as quoted in the wiki article you're mentioning (p({a real; a is normal}) = 1).


Yes, because among other things it's clearly implied by "While a general proof can be given that almost all numbers are normal ... " If true, then the set is infinite.


> But this is as meaningless as it is true

Well, it's a little more complicated than that.

You see, it means that there's the complete description of our universe in there, position of every atom, at every nanosecond. Somewhere in pi there's you living your life.

Also, every possible you.


Still meaningless, because of the insoluble problem of access. Within Pi there's a cure for cancer, written entirely in iambic pentameter, but that fact has no practical meaning because we can't find it.


Emphasis on insoluble: finding an item of interest would, on average, take longer than the history of the Universe so far -- longer than human civilization or any encoding scheme is likely to be in use....


a. prove it

b. pi as a decimal number is an approximation that is only as long as you calculate it to be. Moreover, decimals do not contain non-numeric information. That's just your own interpretation. Imagine a number system where pi is 1. Clearly no names in 1. Now imagine a number system where pi is expressed as π, like... the number system that we use. Clearly no names in π.


> pi as a decimal number is an approximation that is only as long as you calculate it to be.

But Pi is not equal to our approximations of Pi. That's backwards. Pi owes nothing to our lame efforts to approximate it. When someone asserts that Pi may be an infinite sequence, the truth of the proposition doesn't depend on someone computing an infinite sequence of Pi's digits.

> Moreover, decimals do not contain non-numeric information.

Are you aware that your typing is promptly translated into numbers for transmission through the Internet? That A = 65 (in the old ASCII encoding), B = 66, etc.? So yes, decimals do contain non-numeric information if we choose to interpret them that way. And we do.


You're looking for strings in the decimal expansion of pi. Pi is not a decimal number! Every decimal representation of pi is an approximation. Pi is not an infinite sequence, it is a single value a bit higher than 3.14, it is only the decimal (or other bases, besides 10) expansion that has an infinite number of digits.


> Pi is not a decimal number!

Of course it is -- although the choice of base is quite arbitrary. Pi is as much a decimal number as it is a binary number.

> Every decimal representation of pi is an approximation.

Our inability to represent Pi doesn't constrain Pi, it only constrains us. The fact that we cannot fully represent Pi has no effect on Pi or its identity. And Pi has the same identity in any base -- the ratio of a circle's circumference to its diameter.

> Pi is not an infinite sequence

Pi is an infinite sequence.

http://en.wikipedia.org/wiki/Pi#Infinite_series

Quote: "The calculation of π was revolutionized by the development of infinite series techniques in the 16th and 17th centuries. An infinite series is the sum of the terms of an infinite sequence.[49] Infinite series allowed mathematicians to compute π with much greater precision than Archimedes and others who used geometrical techniques."


You're constraining yourself to a wrong definition of what a number is. The number 0xA is not a different number from 0b1010 even though they have a different number of digits.

> Pi is not an infinite sequence

Pi is an infinite sequence.

I was talking about an infinite number of digits, but even this new definition is wrong. Pi is one single value. There are many ways to compute this value. Some of them include infinite series. Note the wording: "The calculation of π was revolutionized by the development of infinite series techniques". Other techniques were used before, and they are still valid. The only advantage of infinite series is that they are easier to calculate, and the approximations converge faster when you compute them on real (finite) hardware.


> You're constraining yourself to a wrong definition of what a number is.

That's your problem, not mine. You keep trying to say that Pi is defined by our approximations of its value.

> Pi is one single value.

So, tell me, which part of "Pi is an infinite sequence" are you actually disagreeing with?

> There are many ways to compute this value.

You persist in confusing Pi with our efforts to approximate its value.

> Other techniques were used before, and they are still valid.

False. 22/7 is not valid. 355/113 is not valid. This is valid:

https://www.dropbox.com/s/423ieio63bsuaat/pi_w.png

The reason? Pi is a mathematical idea that happens to have a numerical value, but the idea transcends the value. The value is a coincidence, which is why choosing to express it as 10 in base Pi changes nothing, and why arguing about the size of its approximations changes nothing.

> The only advantage of infinite series is that they are easier to calculate ...

No, they are much more difficult to calculate, but they convey more meaning. Infinite series are why Pi isn't just a number, any more than e is.


> False. 22/7 is not valid. 355/113 is not valid.

Right, those are approximations. Those are not what we are talking about.

> This is valid: https://www.dropbox.com/s/423ieio63bsuaat/pi_w.png

I agree that it is valid. It is one of many ways of calculating the value of pi. Here is another one: http://upload.wikimedia.org/wikipedia/commons/thumb/2/2a/Pi-... Also there are iterative algorithms. http://en.wikipedia.org/wiki/Pi#Computer_era_and_iterative_a...

> No, they are much more difficult to calculate, but they convey more meaning.

Again, I didn't mean approximations like 22/7. I was comparing them to e.g. the geometric method which took hundreds of years to extend to a few hundred digits. The geometric method conveys the same "meaning" because it is an exact description of pi, just like the infinite series and the iterative algorithms.
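The iterative algorithms mentioned above can be sketched concretely. Here is a minimal, stdlib-only Python version of the Gauss-Legendre iteration (one well-known example of that family; roughly doubling the number of correct digits per step), using the `decimal` module for precision:

```python
# Gauss-Legendre iteration for pi: each step roughly doubles the number
# of correct digits. Sketch only; precision handling is simplified.
from decimal import Decimal, getcontext

def gauss_legendre_pi(iterations: int = 5, precision: int = 50) -> Decimal:
    getcontext().prec = precision + 10  # a few guard digits
    a = Decimal(1)
    b = Decimal(1) / Decimal(2).sqrt()
    t = Decimal(1) / 4
    p = Decimal(1)
    for _ in range(iterations):
        a_next = (a + b) / 2
        t -= p * (a - a_next) ** 2
        b = (a * b).sqrt()
        a = a_next
        p *= 2
    return (a + b) ** 2 / (4 * t)

print(gauss_legendre_pi())  # 3.14159265358979323846...
```

Five iterations already give dozens of correct digits, versus the centuries of effort the geometric method required for a few hundred.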

> Pi isn't just a number, any more than e is.

They are both just numbers. e is not infinite, in fact it is less than 3.


> They are both just numbers.

No, they are ideas. The base of natural logarithms isn't an arbitrary number, it has special properties. It's the same with Pi -- they're ideas that happen to be expressible as numbers. But their numerical value is less important than their identity as ideas.

> e is not infinite, in fact it is less than 3.

Straw man. No one claimed otherwise. But e appears to have an infinite, non-repeating digital sequence, and, like Pi, it is conjectured to be "normal" in the mathematical sense.

But, as with Pi, the fact that e is likely "normal" is much less important than the idea it represents.


OK. It sounded like you were saying that Pi is special just because there is an infinite series that describes it. It's trivial to make an infinite series that sums to any number. http://en.wikipedia.org/wiki/Series_%28mathematics%29#Conver... So the number 2 is just as "infinite" as pi.


> It sounded like you were saying that Pi is special just because there is an infinite series that describes it.

That would be because Pi is special because there are infinite series, and integrals, and mathematical identities, and limit expressions, that describe it in ways that give it a special meaning.

> So the number 2 is just as "infinite" as pi.

You're confusing the existence of a summation with its outcome. Obviously the sum of 2^-n for n between 0 and infinity (inclusive) is equal to 2, but that doesn't make 2 an infinite digital sequence, or in any other sense "infinite".
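The distinction is easy to see numerically. A small Python sketch of the partial sums of that very series:

```python
# Partial sums of sum_{n>=0} 2**-n approach 2. Having an infinite-series
# representation says nothing about the number itself being "infinite".

def geometric_partial_sum(terms: int) -> float:
    return sum(2 ** -n for n in range(terms))

for k in (1, 2, 4, 8, 16):
    print(k, geometric_partial_sum(k))
# partial sums 1, 1.5, 1.875, ... converge to exactly 2
```

The summation is infinite; its value, 2, has a one-digit decimal expansion.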


This is similar to the Infinite Monkeys theorem and is more nuanced than many people think.

http://milesmathis.com/monkey.html

http://en.wikipedia.org/wiki/Infinite_monkey_theorem
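One simple way to quantify the nuance (a back-of-the-envelope sketch, not the full theorem): if a monkey types independent blocks of k keystrokes from an alphabet of size s, a given block matches a k-character target with probability s**-k, so the expected number of blocks before a match is s**k -- astronomical even for short targets.

```python
# Expected number of k-keystroke blocks before a random typist matches
# a target string, assuming a uniform alphabet of the given size.

def expected_blocks(target: str, alphabet_size: int = 26) -> int:
    return alphabet_size ** len(target)

print(expected_blocks("banana"))  # 26**6 = 308915776
```

For anything the length of a sonnet, the expected wait dwarfs the age of the universe, which is the nuance the "infinite" in the theorem's name is doing all the work to hide.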


I did a quick test for this, would add a comment to the stackexchange thread, but it's closed now.

Test at https://github.com/walle/pi2ascii

The algorithm can be greatly improved though.
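I don't know the linked repo's actual approach, but the core of such a test can be sketched with stdlib-only Python: stream pi's digits with Gibbons' unbounded spigot algorithm and search for a digit pattern in the expansion. The function names here are my own, not the repo's:

```python
# Stream decimal digits of pi (Gibbons' unbounded spigot algorithm),
# then search for a digit string within the first max_digits digits.

def pi_digits():
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

def find_in_pi(pattern: str, max_digits: int = 10_000) -> int:
    """0-based offset of a digit string in pi's expansion ('3' is offset 0),
    or -1 if not found within max_digits digits."""
    gen = pi_digits()
    digits = "".join(str(next(gen)) for _ in range(max_digits))
    return digits.find(pattern)

print(find_in_pi("999999"))  # the "Feynman point"
```

Any fixed string of digits is conjectured (not proven) to appear eventually, which is exactly the normality question from the StackExchange thread.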


You could pose the question the other way around: is the infinite hidden within the finite? This philosophical question is probably outside the realm of mathematics, but an interesting thought-exercise.


> You could pose the question the other way around: is the infinite hidden within the finite? This philosophical question is probably outside the realm of mathematics ...

There's a reason it's outside the domain of mathematics -- it contradicts the definition of infinite. By definition, an infinite set cannot be a subset of a finite one.

> ... but an interesting thought-exercise.

Briefly, but easily answered. :)


I came to think of it after reading part of one of the answers in OP link:

"So yes, it has the story of your life -- but it also has many false stories, many subtly wrong statements, and lots of gibberish."

The "gibberish" is bound to contain many truths. This kind of encoding is very different from the mathematical idea of subsets. I am probably bending concepts beyond breaking point here, and you are correct.


But look at the plain meaning of the word: "infinite" is "in finite".

More seriously, if you aren't careful in your interpretations, you observe that the real interval [0,1] has finite measure but infinite cardinality. And the rational interval has 0 measure but infinite cardinality, so you do find infinite in the finite.
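The measure-zero claim for the rationals can be made precise with the standard covering argument (a sketch):

```latex
\text{Enumerate the rationals in } [0,1] \text{ as } q_1, q_2, q_3, \dots
\text{ and cover each } q_n \text{ by the interval }
I_n = \left( q_n - \tfrac{\varepsilon}{2^{n+1}},\; q_n + \tfrac{\varepsilon}{2^{n+1}} \right).
\text{Then } \sum_{n=1}^{\infty} |I_n| = \sum_{n=1}^{\infty} \frac{\varepsilon}{2^n} = \varepsilon,
\text{ and since } \varepsilon \text{ is arbitrary, the rationals in } [0,1]
\text{ have Lebesgue measure } 0 \text{ despite being infinite in cardinality.}
```

So a countably infinite set really does fit inside a set of arbitrarily small total length -- infinite cardinality inside finite (even vanishing) measure.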


If computing ever got powerful enough to compute pi to the nth digit nearly instantaneously, would it be possible to compress anything by just saying "take the digits between positions x and y"?


As the other replies have noted, you would generally not actually save anything. Let's put some numbers behind that.

Consider all possible strings of n bits. There are 2^n of these. If each is represented by an index into pi, we have 2^n indexes. The best possible case is that the indexes all point within the first 2^n bits of pi, which allows them to be represented by integers from 0 to 2^n-1. Representing such an integer takes n bits.

Result: representing an n bit string by its index into pi takes in general at least n bits to represent the index. There is no compression to be had here.
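The counting argument above can be sketched in a couple of lines of Python (an illustration of the pigeonhole bound, not a real codec):

```python
# Pigeonhole sketch: there are 2**n distinct n-bit strings, so even in the
# best case their indexes into pi must span 2**n values, and the largest
# index needs n bits to write down -- no net compression.

def min_index_bits(n: int) -> int:
    """Best case: the 2**n strings point at offsets 0 .. 2**n - 1 of pi,
    so the largest index needs exactly n bits."""
    largest_index = 2 ** n - 1
    return largest_index.bit_length()

for n in (8, 64, 1024):
    print(n, min_index_bits(n))  # index is as long as the string it names
```

And that is the best case; in practice the first occurrence of a given n-bit string sits far deeper into pi than position 2**n, so the index is typically longer than the data.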


You can do it but it wouldn't compress most things because the x and y will be larger than the input.


Yes, but not in a way that would be useful. For a sufficiently complex message, the index into Pi that matches it might well be longer than the original message. So, yes, but no gain -- the result would be larger than the original.

This holds even given a huge precomputed set of Pi digits and some imagined fast way to look up the right index -- the index into that set would likely be larger than the message it points to.


Well, obviously, everything exists in the infinite - somewhere. A completely trivial realization, however.


This is not true - you can have infinite things without them containing everything. So no, this is not obvious. Indeed, pi may not be normal, so there may be things not contained in it.


It represents a fractal.


It isn't a gag. It is a (conjectured, tested, not proven) mathematical fact explained on the SE page.

It is relevant to Hacker News as an illustration of the effectiveness of seemingly pointless pictures of text as memes, obnoxious though they may be.



