Is mathematics mostly chaos or mostly order? (quantamagazine.org)
107 points by baruchel 23 hours ago | 74 comments





> the mathematical universe, like our physical one, may be made up mostly of dark matter. “It seems now that most of the universe somehow consists of things that we can’t see,”

Not heaps fond of relating invisible things in the mathematical universe to dark matter! Although maybe both might turn out to be imaginary/purely-abstract? Imaginary things can absolutely influence real things in the universe, it's just that they are not usually external to the thing they are influencing. If I imagine making a cake say, and then I go ahead and make the one I imagined, the 'virtual' cake was already inside me to begin with, and wasn't 'plucked' from a virtual universe of possible cakes somewhere outside my knowledge of cake-making.

Something nags at the back of my mind around this about maths though, as if to suggest that as soon as there was one-of-anything, that was kinda an 'instantiation' of the most abstract "one" object from the mathematical universe (irrespective of what axioms are used, as long as they support something like one). But I doubt there's ever been exactly-PI-of-anything in the real universe, just a whole bunch of systems that behave as if they know (or are perhaps in the process of computing) a more exact value! (spherical planets, natural sine waves etc.)

Very interesting article, I wish my math was stronger! I can just skirt the edges of what they're actually talking about and it's tantalizing! Would love to know more about these new types of cardinal numbers they've developed/discovered.


An interesting thing about the quote you highlighted is that it's already true about the set of real numbers itself. The set of real numbers that can be precisely, individually identified is a countable subset of all real numbers. That means the vast majority of real numbers, an uncountable amount of them, can not be individually defined and thought about.

This is subtle, and a simple counting argument (definable means satisfies a finite formula, there are only countably many finite formulas, there are uncountably many reals, therefore there must be undefinable reals) doesn't quite work, because "definable in ZFC" is not something that is formalizable in ZFC, so the usual set-theoretic counting arguments don't apply.

So it is in fact possible and consistent with ZFC that all reals are definable.

See: https://mathoverflow.net/questions/44102/is-the-analysis-as-...


Thanks, that's a wonderful link and a nice puzzle to think about. The best intuition I have for it is that since the predicate "isDefinableReal(x)" is not itself definable in first-order set theory, there is no way to construct the set of all definable reals in the first place. Thus saying it's countable is basically meaningless - what, exactly, is countable?

If you use ZFC+Consistent(ZFC) as your meta-theory, and within it consider a model of ZFC, then surely one can consider the set (in the meta theory) of sentences which pick out a unique real number in the model, and then the set of real numbers in the model which are picked out by some sentence? It might not be a set that belongs to the model, but it’s a set in the meta-theory, right?

And, I imagine that the set of real numbers of the meta theory could be (in the meta theory) the same set as the set of real numbers in the model?


You can do this, but things get strange in the meta-theory. Some models of ZFC are countable according to the meta-theory! And some of them have models of the reals that are countable according to the meta-theory. There's no contradiction here, because what the meta-theory thinks "countable" means has nothing to do with what the inner model thinks "countable" means.

(for an extreme example of this, by the Löwenheim–Skolem theorem there are countable models of ZFC)

So you can do what you are suggesting, and you will of course get a countable set of reals (or what are reals according to the inner model), but they might not be countable according to the inner model. They might not even be a set according to the inner model, and there are even inner models that think you've got all of the reals!

(see https://mathoverflow.net/questions/351659/set-of-definable-r... pretty heavy reading)

So the statement "the set of definable reals is countable" is nonsense - you're talking about things that live in different universes of meaning.


That is very interesting I agree, and certainly any list of descriptions/identifiers must be countable, though I wonder if there's any validity in descriptions that describe things in aggregate?

It's certainly a brain-bender that even in the unit interval, if we imagine filling in all the rationals and then adding in the describable irrationals like PI/4, sqrt(2)/2 and so on, this still does not even come close to covering the unit interval - or any interval - of Real numbers! My imagination sees a line with a heck of a lot of dots on it, while still knowing that there are clearly uncountably more values that are not covered/described! Amazing! The continuum (Real numbers) is such a fascinating concept!


Almost all real numbers are normal numbers, which don't even have a finite representation.

Sure you can assign them to an arbitrary set, but you don't have access to the value.

It is a hay in the haystack problem, where you really only have access to the needles, not the hay.


> Almost all real numbers are normal numbers, which don't even have a finite representation.

Plenty of normal numbers have a finite representation from which digits can be efficiently extracted. E.g., Champernowne's constant (in any base) is normal, and you can find its digits with a relatively simple algorithm.
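To make that concrete, here's a minimal sketch (my own illustration, not from the comment) of the digit-extraction algorithm for Champernowne's constant in base 10, which simply concatenates the decimal expansions of the naturals:

```python
from itertools import count, islice

def champernowne_digits():
    """Yield successive decimal digits of Champernowne's constant,
    0.123456789101112131415..., by concatenating the naturals."""
    for n in count(1):
        for d in str(n):
            yield int(d)

# First 15 digits: 1 2 3 4 5 6 7 8 9 1 0 1 1 1 2
print(list(islice(champernowne_digits(), 15)))
```

Despite being fully computable this way, the number is (provably, in base 10) normal.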

All computable reals can similarly have their digits extracted by some algorithm or another, even though it may take a long time. I wouldn't call that "not having access to the value". Of course, uncomputable numbers are a different story, but they have nothing to do with normality in any base.

And of course, radix representations are not the only way to evaluate real numbers. E.g., you could represent them with simple continued fractions (which would still allow addition, multiplication, comparison, etc.), and then you could write out any quadratic irrational with a periodic expansion.


"almost all" or "almost everywhere" is emphasized because I mean it in the measure theory sense.

Meaning it holds for all elements of a set except for a subset that has measure zero.

Yes, some normal numbers are in the constructible reals, but that is a measure zero subset.

You are putting your hand in the haystack and only finding needles, finding the hay in the haystack is the problem here.


It's even more extreme than that!

Take the (uncountable) set of Real numbers. Remove the normal numbers, which is almost all of them in the sense that the probability that "a uniformly randomly chosen real number is normal (and therefore also undescribable)" is 1. The remaining set of numbers, which has measure 0 in the Real numbers, is still uncountable, meaning that the probability of randomly choosing a describable number in that set is again 0.

I'm not sure how deep this chain can go. Google AI says "only 1 steps" but it's not admitting the case described in this comment.


My intuition about the question is related: the set of all Turing machines (algorithms) is countable, but the set of all languages (problems to solve) is uncountable. If you take mathematics to be the bigger, uncountable picture, it’s mostly chaos, but if you limit consideration to algorithms, then it’s mostly order.

> the set of all languages (problems to solve) is uncountable

The set of all problems that can be described by a finite description is countable. Why would we care about the rest of them?


Because we're theorists.

> But add a smaller cardinal to one of the new infinities, and “they kind of blow up,” Bagaria said. “This is a phenomenon that had never appeared before.”

I have to wonder just what is meant by this, because in ZFC, a sum of just two (or any finite number) of cardinals can't "blow up" like this; you need an infinite sum. I mean, presumably they're referring to such an infinite sum, but they don't really explain, and they make it sound like it's just adding two even though that can't be what is meant.

(In ZFC, if you add two cardinals, of which at least one is infinite, the sum will always be equal to the maximum of the two. Indeed, the same is true for multiplication, as long as neither of the cardinals is zero. And of course both of these extend to any finite sum. To get interesting sums or products that involve infinite cardinals, you need infinitely many summands or factors.)
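In symbols, the absorption laws described in the parenthetical (standard ZFC cardinal arithmetic, not from the article):

```latex
% For cardinals \kappa \le \lambda with \lambda infinite:
\kappa + \lambda = \lambda, \qquad
\kappa \cdot \lambda = \lambda \quad (\kappa \neq 0).
% To get something strictly bigger you need infinitely many terms, e.g.
\sum_{n < \omega} \aleph_n = \aleph_\omega .
```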


I suspect they mean "add" in the sense of "add in an axiom asserting the existence of another cardinal". Things like consistency strength of the resulting theory seem to vary wildly depending on what other cardinals you throw into the mix (if I understood the article correctly, haven't read the paper, mea culpa).

Oh, that would make more sense, yeah.

I think you are right. Plain English is a terrible language for higher math. It's extremely misleading (especially when talking about infinite objects and probability, where most of the mystery and confusion comes from terms that aren't precisely defined in the reader's mind).

I enjoyed quanta magazine for a while. But I think they ended up pretty much exhausting the set of things that can be explained to an interested audience with a bit more English, but doesn't take actual mathematics. I know enough about this topic to know that reading the article didn't teach me anything useful because I don't know enough to understand what they are saying. And if I did know enough, this article probably still wouldn't effectively teach me because it's too simplified.

I haven't seen a quanta article in a while that I found useful. I appreciate the attempt but I don't think it works anymore. And I don't think it's their "fault"... I just think the slice of things this works for was smaller than we might have liked.


Math is entirely chaos. In a slangy sense that I can't prove, the set of math that we would call "ordered" is of measure 0 against all the mathematical structures that "exist", without getting into exactly what that means.

That's also the interesting math, so it is worthy of study. But the math that is interesting is the exception.

A "randomly" chosen function from the set of all possible functions is a function with some infinite input that maps it to an infinite output (with any of the infinite ordinals in play you like) where there is no meaning to any of the outputs at all, indistinguishable from random. (The difficulties of putting distributions on infinite things is not relevant here; that's a statement of our limitations, it doesn't make these structures that we can't reach not "exist".)

It's not amazing that if we take a "wrong" turn down the interesting math we end up in increasing levels of chaos. What's impressive is how interesting the not-pure-chaos subset manages to be, and how well it holds together.


If I had a semester or two of free time I'd love to hit this subject again. I once told my math prof (a logician) who made a comment about transfinite cardinals: careful, it's powerful, but it's power from the devil. I half regret that comment in retrospect.

I've never made peace with Cantor's diagonalization argument, because listing real numbers on the right side (with natural numbers on the left-hand side for the mapping) is giving a real number, including transcendentals, that pre-bakes in a kind of undefined infinite.

Maybe it's the idea of a completed infinity that's my problem; maybe it's the fact I don't understand how to define (or forgot cauchy sequences in detail) an arbitrary real.

In short, if the reals are confusing, you can only tie yourself up in knots reasoning with something confusing.

Sigh - wish I could do better!


> Maybe it's the idea of a completed infinity that's my problem; maybe it's the fact I don't understand how to define (or forgot cauchy sequences in detail) an arbitrary real.

As someone who also has never fully made his peace with the diagonality argument, but just chosen to accept it as true, as a given, this kind of bumps up against an interesting implication of different cardinalities of infinity.

To precisely define an arbitrary real you'd need some kind of finite string that uniquely identifies that real number. Finite strings can be mapped, 1 to 1, to natural numbers. Therefore there can't be a finite string for any real number that uniquely identifies it. Otherwise we'd have a mapping between natural numbers and real numbers.
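The strings-to-naturals mapping is easy to make concrete. Here's a hedged sketch (alphabet choice is my own, arbitrary assumption) using a bijective base-K encoding, so distinct finite strings always get distinct natural numbers:

```python
import string

# Hypothetical finite alphabet; any finite alphabet works.
ALPHABET = string.ascii_lowercase + string.digits + " .,!?/-"
K = len(ALPHABET)

def string_to_nat(s):
    """Encode a finite string as a natural number (bijective base-K)."""
    n = 0
    for ch in s:
        n = n * K + ALPHABET.index(ch) + 1
    return n

def nat_to_string(n):
    """Decode again, witnessing that the encoding is one-to-one."""
    out = []
    while n > 0:
        n, r = divmod(n - 1, K)
        out.append(ALPHABET[r])
    return "".join(reversed(out))
```

Since every finite definition is such a string, the definable reals inject into the naturals.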

In fact, the set of uniquely identifiable real numbers is a countable subset of real numbers. [1]

Somehow, this realization has helped me make peace with the uncountability of real numbers.

[1] Sorry if use words like "unique", "identify", "define" in not quite the right way. I hope the meaning I'm going for comes across.


I will give this more consideration; thank you for the comment.

For now I just want to add that you hit a bit closer to the sleight of hand in Cantor's argument (for me), which is alluring but hard to surmount in the last 10% of the argument.

The natural numbers are constructible, finite. They are finite to write down; it requires a finite amount of code (tape) to output one, etc. The 1:1 mapping business gets the concept of infinity onto the table without engaging a completed infinity. So far, it's solid, followable, etc. ... now for the next 5% you toss real numbers in on the rhs ... then produce another real off the diagonal for 5% more ... and |Z| /= |R|.

Here real numbers live under the shadow or reflect the light of nats, which is misleading. The reals are not well defined objects.

Now, the realist (the mathematician) will argue: the point of Cantor's argument is not to construct reals as part of the solution to |Z| /= |R|. The point is only to establish there's no bijection. In truth I agree: the focus is on the mapping not getting dragged into the mud of construction.

However, I remain unclear if too much got swept under the rug in that (practical-minded) argument. I will have to re-read Chaitin/Kolmogorov ... so I need 4 semesters now. This is my spooky action at a distance problem.


A way to make peace with the Reals is to understand them as "potential numbers". Everywhere you look, there is a Real number. Everyone logical agrees about that.

But what about where you don't look? Either you take the orthodox axiomatic view that Real numbers are there too, or you take the constructivist or finitist (or perhaps quantum mechanical?) view that nothing is there until you look, because the act of looking is the same as the act of creation.


I wouldn’t call it quantum mechanical. The “looking” in math is not like the measurement of an observable/operator in quantum mechanics. When you consider a thing in math, there’s no alternative thing that you could have considered instead which would correspond to a different operator that doesn’t commute with the first one.

There are a couple of strategies for understanding the real numbers. One is to write down a definition of real numbers, for example using rational numbers and Dedekind cuts, hoping that what you're describing is really what you mean. The other is to write down the properties of real numbers as you understand them as "axioms", and go from there. An important property of real numbers that always comes up (either as a consequence of Dedekind cuts or as an axiom itself) is the least upper bound property -- every non-empty set which has an upper bound has a least upper bound. That's what gives you the "completeness" of the real numbers, from which you can prove facts like the Cauchy completeness of the real numbers (i.e., Cauchy sequences always converge), the Heine-Borel theorem (closed and bounded subsets of the reals are "compact", and vice-versa), and Cantor's intersection theorem (that the nested intersection of a sequence of non-empty compact sets is non-empty).

The diagonalization argument is an intuitive tool, IMHO. It is great if it convinces you, but it's difficult to make rigorous in a way that everyone accepts due to the use of a decimal expansion for every real number. One way to avoid that is to prove a little fact: the union of a finite number of intervals can be written as the finite union of disjoint intervals, and that the total length of those intervals is at most the total length of the original intervals. (Prove it by induction.)

THEOREM: [0, 1] is uncountable. Proof: By way of contradiction, let f be the surjection that shows [0, 1] is countable. Let U_i be the interval of length 1/2^i centered on f(i). The union V_n = U_1 + U_2 + ... + U_n has combined length at most 1 - 1/2^n < 1, so it can't contain [0, 1]. Another way to state that is that K_n = [0, 1] - V_n is non-empty. K_n is also compact, as it's closed (complement of V_n) and bounded (subset of [0, 1]). By Cantor's intersection theorem, there is some x in all K_n, which means it's in [0, 1] but none of the U_i; in particular, it can't be f(i) for any i. That contradicts our assumption that f is surjective.

Through the right lens, this is precisely the idea of the diagonalization argument, with our intervals of length 2^-n (centered at points in the sequence) replacing the intervals of length 10^-n (not centered at points in the sequence) implicit in the "diagonal" construction.
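As a quick numeric sanity check on the interval lengths used in that proof (purely illustrative, my own sketch):

```python
def covered_length(n, base=2.0):
    """Upper bound on the total length of U_1, ..., U_n
    when U_i is an interval of length 1/base**i."""
    return sum(base ** -i for i in range(1, n + 1))

# With base 2 the total stays strictly below 1 for every finite n,
# so the U_i can never cover all of [0, 1].
print(covered_length(10))  # 1 - 2**-10 = 0.9990234375
```

Swapping in base 3, as the sibling comment suggests, bounds the total length by 1/2 instead.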


> The union V_n = U_1 + U_2 + ... + U_n has combined length at most 1 - 1/2^n < 1, so it can't contain [0, 1].

This argument seems way less convincing to me than the diagonalization argument, because as n is going to infinity that length does become 1.


Then use 1/3 instead of 1/2 for a combined length of 2/3 -- the total length of the intervals can be as small as you like. This hints at the fact that any countable subset of the real numbers is Lebesgue measure zero.

Even using 1/2, the set that remains is nonempty due to the Cantor intersection theorem. The total length of the intervals is 1, which means that the remainder has no "interior" (i.e., contains no open interval), but the converse is not true: removing intervals whose lengths sum to less than one does not mean that the remainder will contain any interval. This is the consideration that allows you to create what are called "fat Cantor sets" -- the middle thirds Cantor set has Lebesgue measure zero, but by removing smaller intervals you can get other, homeomorphic sets that have positive measure.


Then we can crank that 2 up to 3 or 4.

Then let U_i be the interval of length 1/3^i centered on f(i), so that the total length is 1/2, far less than 1.

Even though the supposed "surjection" is infinite, it's still the case that every x in [0, 1] would be in one of the U_n and therefore in some V_n. But every K_n clearly has measure > 0 and is therefore non-empty, and since the K_n are nested subsets, there is at least one special point x_omega that is in all of the K_n.

The "intuitive" problem (not a logical problem) with PP's proof is that it relies on measure and completeness, which is far more technically complex than the decimal diagonalization argument.

Here is an intuitive "rebuttal": the same proof strategy seemingly proves that the rationals are uncountable! (This is of course technically false, because rational intervals are incomplete and all have measure 0 in the first place. But understanding this is much more complicated than imagining a 2-D infinite spreadsheet of decimal numbers between 0 and 1.)


Are these the same proof?

If I let U_i be the interval of length 1/10^i centered on f(i), then what I'm saying is: pick a different decimal digit to avoid this particular real.


> I once told my math prof (logician) who made a comment about transfinite cardinals: careful it's powerful but it's power from the devil. I half regret that comment in retrospect.

You're in good company -- from Penelope Maddy's "Believing the Axioms"[0]:

---

Measurable cardinals were introduced by Ulam in [1930], where he proved that they are inaccessible. They are now known to be much larger than that, larger than all the hyperinaccessibles, Mahlos and weakly compacts. Indeed, because of their power, they are probably the best known large cardinals of all. The voice of caution reminds us that they were invented by the same fellow who invented the hydrogen bomb.

---

0: https://jwood.faculty.unlv.edu/unlv/Articles/Maddy1.pdf


Cantor’s original proof of the uncountability of the reals didn’t use a diagonalization argument; it used order + completeness, and in fact applies to any dense, complete linear order. https://en.wikipedia.org/wiki/Cantor%27s_first_set_theory_ar...

Likewise his proof that there is no surjection from a set to its power set uses a more general diagonalization argument that doesn’t make any uncomfortable assumptions: https://en.wikipedia.org/wiki/Cantor%27s_theorem


> I've never made peace with Cantor's diagonaliztion argument

Maybe you'd prefer a purely set-theoretic one:

---

Let R be a set. Let S be the set of all subsets of R.

We want to prove that |S| > |R|, by proving that a bijective function from R to S cannot exist. We will do that by assuming that it can, and then deriving a contradiction.

Assume there is a bijective function f : R -> S. Define D = { r ∈ R | r ∉ f(r) }. Since f is a bijection, there exists some r₀ ∈ R such that f(r₀) = D.

However, by the definition of D, we have:

- If r₀ ∈ D, then r₀ ∉ f(r₀) = D, which is a contradiction.

- If r₀ ∉ D, then r₀ ∈ f(r₀) = D, which is also a contradiction.

Therefore, our assumption that there exists a bijective function f : R -> S must be false.

Therefore, |S| > |R|.
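For a finite toy version of this argument (my own illustration, not part of the proof), one can exhaustively check that the diagonal set D is missed by every function f : R -> S:

```python
from itertools import combinations, product

def powerset(elems):
    """All subsets of a finite set, as frozensets."""
    return [frozenset(c) for k in range(len(elems) + 1)
            for c in combinations(elems, k)]

R = [0, 1, 2]
S = powerset(R)

# Enumerate every function f : R -> S (there are 8**3 = 512 of them)
# and verify Cantor's diagonal set D = { r in R : r not in f(r) }
# is never in the image of f.
checked = 0
for image in product(S, repeat=len(R)):
    f = dict(zip(R, image))
    D = frozenset(r for r in R if r not in f[r])
    assert all(f[r] != D for r in R)  # D differs from f(r) at r itself
    checked += 1
print(checked)  # 512
```

The infinite case is the same construction; only the exhaustive enumeration is a finite luxury.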


If it's the idea of completed infinity that's the objection, then it's the first step, constructing the powerset, that would be problematic. Various forms of finitism would not accept that one can 'take all the subsets, finite or infinite, and quantify over them' and obtain a meaningful result past the formal level.

They should still then accept that there is no surjection from a set onto the collection of all of its subsets (if only because, for them, there is no such set).

How much of modern set theory is reverse-engineered from axioms rather than discovered? We're always building highways through a forest we haven't mapped, assuming every tree will fall in line. And suddenly these new large cardinals show up that don't even sit neatly in the ladder. It may not be a failure of math, but a failure of narrative. We thought the infinite was climbable; now it's folding sideways. Maybe the math we're building is just a subset of what's possible, shaped by what's provable under our current tools. A lot of deep shit is probably hiding in the unprovable.

This isn't really how it went down, historically. They considered lots of different large cardinals, and then they turned out to be linearly orderable by consistency strength. And then it's natural to wonder if it's a general rule.

Gregory Chaitin already proved that there are mathematical truths that are completely random.

The orderly stuff are already well studied

I’ve always considered math something that is discovered: neither chaotic nor orderly, it just… is. Really brilliant people make new discoveries, but those discoveries were there the whole time, waiting to be found.

This article seems to kind of dance around yet agree with the discovery thing, but in an indirect way.

Math is just math. Music is just music. Even seemingly random musical notes played in a “song” have a rational explanation relative to the instrument. It isn’t the fault of music that a song might sound chaotic; it’s just music. Bad music, maybe. This analogy can break down quickly, but in my head it makes sense.

Disclaimer - the most advanced math classes I’ve taken: calc3/linear/diffeq.


Mathematics isn't monolithic—it depends heavily on the axioms you choose. Change the axioms, and the theorems change. ZFC, ZF¬C, intuitionistic logic, non-Euclidean geometry—each yields a different “math,” all internally consistent. So it’s not right to say math “just is” in some absolute sense. We’re not just discovering math; we’re exploring the consequences of chosen assumptions.

For instance:

Under Zermelo-Fraenkel set theory with the Axiom of Choice (ZFC), every set can be well-ordered, but we also get the Banach–Tarski paradox.

Under ZF without Choice, parts of analysis as we know it no longer hold.

In constructive mathematics, which avoids the law of the excluded middle, many classical theorems lose their usual formulations or proofs.

Non-Euclidean geometries arise from altering the parallel postulate. Within their own axioms, they are as internally consistent and "natural" as Euclidean geometry. Do non-intersecting lines exist in this universe? I've no idea.


This just steps one meta level higher. Yes, you can make your object of analysis the axioms and what they lead to and proof theory etc. But now you've just stepped back one level. What are the axioms that allow you to derive that "ZFC leads to the Banach–Tarski paradox"? Is this claim True and discovered, or is it in itself also simply dependent on some axioms and assumptions?

This is part of a broader meta-ization of culture. Philosophers are also much more reluctant to make truth claims in the last century compared to centuries ago. Everything they say is just "To a Hegelian, it is {such and such}. For Descartes, {x, y, z}." If you study theology, they don't teach with conviction that "Statement A". They will teach that Presbyterians believe X while the Anglicans think Y, and the Catholics think it's an irrelevant distinction. Of course when push comes to shove, you do realize that they do have truth claims, and moral claims that are non-negotiable but are shy to come forward with them and explicitly only talk in this "conditional" "if-then" way.

In fact many would argue that math is not too far from theology. People who were obsessed with math limits, like Gödel, were also highly interested in theology.

I guess physics is the closest to still making actual truth claims about reality, though it's also retreating to "we're just making useful mathematical models, we aren't saying that reality is this way or that way".


No, you are wrong. 90% of philosophy is bullshit about assigning a fake truth status depending on WHO said what. Meanwhile, math and science always put FACTS over personas.

About the axioms: not really. Axiom sets are mostly there just as a 'shorthand' to quickly describe a context we're talking about, but ultimately you could do away with them. E.g. if we let A be the set of axioms from some theory (e.g. set theory, number theory, etc.) and you have a mathematical statement of the form X => Y within that theory, you could just as well consider the statement "A ^ X => Y" in the purely formal system without any axioms at all. Then it is purely a logical question (essentially, whether X => Y is a theorem within theory A) and more objectively true than "X => Y", which would be theory-dependent.

The overarching point still stands: our formal systems are just models built to describe the patterns we observe. In that sense, math “just is.” The fact that some models aren’t compatible with others doesn’t undermine that—it just shows they’re incomplete or context-dependent views into a larger structure.

> all internally consistent

Well, we hope.


What makes you consider it a "discovery" instead of a creation of us humans?

I am more on the side of seeing maths as a precision language we utilize and extend as needed, especially because it can describe physically non-existent things e.g. perfect circles.


I rather think the discovered/invented thing is just semantics.

You can say that literally anything was "just discovered".

Thriller by Michael Jackson? That particular ordering of sound waves always theoretically existed; MJ and various sound engineers just discovered it, they didn't create anything.

The cappuccino? It's just a particular orderly collection of chemicals, and such a collection always theoretically existed. Those baristas are explorers, discovering new latte art shapes; nothing creative there.

Cantor's diagonal argument? Yep, those numbers were just waiting to be discovered and written in that order.

And so on. The entire argument is meaningless, pointless philosophizing. Nobody wastes their time saying latte art was discovered rather than invented, but somehow when it comes to mathematics this is considered a deep and worthy discussion.


To me, the difference is: the way to __make__ music was invented; the music itself was already there to be discovered.

Didn't mean to rub you the wrong way.


> Didn't mean to rub you the wrong way

Not at all, you sparked thought in an area I find fascinating (philosophy of maths). Albeit I find this specific topic a bit too commonly discussed relative to how important it is, but I'm still happy to talk about it and share my thoughts.


> Didn't mean to rub you the wrong way.

To me, it doesn't sound like you did. The parent comment of yours just stated, albeit bluntly, that "invention" and "discovery" are fundamentally the same. Whether we use one or the other depends on how big the space of possibilities feels to us. Math has a very rigid and easily enumerable space of possibilities (strings of symbols), so we call it "discovery", while cooking has an enormous space of possibilities (countless pieces of meat and vegetables, each unique in its configuration of atoms, etc.), so we call it "invention".

When you invent a way to make music, did you really invent it? Or did you simply discover a particular configuration of atoms that can produce sound when handled in a particular way, that was already there in some platonic universe of ideals? Either way, the end result is the same. Nothing really changes.


> Math has a very rigid and easily enumerable space of possibilities (strings of symbols), so we call it "discovery", while cooking has an enormous space of possibilities (countless pieces of meat and vegetables, each unique in its configuration of atoms, etc.), so we call it "invention".

I think you've made a good point here.

Although, to nitpick a bit: both spaces (cooking and maths) are infinite, and for most fields of math, uncountably infinite. The difference is in the numbers we are dealing with. For cooking, it's mixtures of trillions of molecules. For maths, it's usually on the order of thousands of symbols (although those ellipses do some infinitely heavy lifting!).


Is TRIZ (and its great-great-grandchildren, LLM GenAI) invention, or discovery? https://en.wikipedia.org/wiki/TRIZ

I like to think of axioms as "created" while the consequences (i.e. theorems) of said axioms are "discovered". You can't create logic consequences (conclusions) given a set of axioms, but you can certainly create the axioms (premises).

It's not clear to me why people think perfect geometries do not exist, they occur all the time in physics.

Of composite matter, sure, because it's composite in a certain sort of way, you do not get perfect circles. But the structure of macroscopic material does not exhaust the physical.

Even here, one could define some process (e.g., gravitational) which drives matter towards being a perfect circle, because perfect circularity is a property of that process. This is, as a matter of fact, true of gravity -- if it weren't, we'd observe violations of Lorentz invariance, which we do not.


When was the circle discovered? When it became essential to physics?

Perfect in a single-body universe, perhaps, but the gravity field of a particle is perturbed by other nearby particles - where "nearby" is relative to precision desired - and therefore never a perfect sphere.

Or, to put it another way, so-called "perfect circles" exist in a real, 4-D, wibbly-wobbly gravity-distorted space, and are no longer perfect Cartesian circles.

They still only exist theoretically; not in practice.


Circularity is still a property of the process. One requires perfect circles to describe it.

It is also easy enough to construct circular state spaces, and the like.

The idea that what's real is simply the geometry of macroscopic visible matter, or even of matter alone, is nonsense.

The world is "immanently abstract", and possesses primeness, circularity, etc. in itself, not as something merely imagined. This is obvious from the physical description of its evolution.

Irregularity, of this kind, is derivative of a geometrical reality. The irregular doesn't govern the regular; if it did, there would be no structure whatsoever.


Perfect circles do exist in probability: 2*pi shows up for uniformly random directions and throughout statistics.

Large cardinal axioms are the paradigmatic example of "invented" math.

Measuring. Movement is applied calculus. Pi isn't magical: it relates the set of all points at a given distance from a point to that distance itself.

We need a word-less world of math where all meaning is derived from figures. Words are confusing.

"If you can't describe the meaning using only pencils and compass, you don't mean it"


And especially when we mix different categories. Speaking of any infinity as an object is misleading, because it's rather a process.

Infinities (transfinite cardinals) in the sense used by the article are absolutely objects. We’re not talking about infinite sums or other sequences and their limits. (And limits aren’t really “processes” either – the limit of the sequence 0.9, 0.99, 0.999, … is exactly 1, as a well-known example which nonetheless is controversial among people who don’t know what limits are.)
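A small sketch of that limit claim, using exact rationals so floating point doesn't muddy the point (the helper name `a` is my own):

```python
from fractions import Fraction

# a(n) = 0.99...9 with n nines, computed exactly as the rational 1 - 10^-n
def a(n):
    return 1 - Fraction(1, 10 ** n)

# The gap to 1 is exactly 10^-n: it drops below any epsilon you name,
# which is precisely what "the limit is 1" means. No unfinished process.
for n in (1, 2, 3, 10):
    print(n, a(n), 1 - a(n))
```

The limit isn't the "end" of a process; it's the single number the gaps pin down.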

Sure, if you accept reifying concepts into objects as valid. But that is a gateway to misery.

What's the difference? How is the concept of a transfinite cardinal less of an object than, say, the concept of a set? Or a real number? All are well enough defined that you can do useful math with them, and that's really all that matters.

Object as in OOP?

Mathematical objects, like numbers, sets, matrices, functions, …

I can think of N as a process in a sense, because I can keep adding a number. But I can't think of R as a process like this, specifically because there is no surjective mapping from N to R.

How would you think of R as a process?
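For what it's worth, N "as a process" is literally expressible as a generator, and Cantor's diagonal argument is the reason no analogous process can enumerate R. A sketch (the names `naturals` and `diagonal_digit` are mine):

```python
from itertools import islice

# N as a process: a generator that produces each natural in turn, forever.
def naturals():
    n = 0
    while True:
        yield n
        n += 1

print(list(islice(naturals(), 5)))  # prints [0, 1, 2, 3, 4]

# No such process exists for R. Given ANY purported enumeration of reals
# in [0, 1) -- modeled here as: enumeration(n) is the n-th real, itself a
# function from digit position to digit -- the "diagonal" real differs
# from the n-th real at the n-th digit, so it was never in the list.
def diagonal_digit(enumeration, n):
    return (enumeration(n)(n) + 5) % 10  # guaranteed != enumeration(n)(n)
```

So the honest answer may be that R can't be thought of as a process at all: any process only ever reaches countably many reals.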


It is amazing what Euclid was able to prove and, frankly, even imagine using just geometric figures.

Nowadays, we have symbolic notation. The only research articles I read are in epidemiology, so I don't know how much notation is used in pure math journals. But I can't remember seeing any notation in those articles beyond what one would encounter in high school. I guess authors see more value in deceptive narrative and descend into strict logical language only when necessary.


Not just Euclid - even Newton's Principia used geometric demonstrations, rather than algebraic ones.

Mathematics is a human construct, one among many others, as are order and chaos. One of the characteristics of human constructs is the never-ending battle to redefine them, as is evident in any investigation of the historical uses and definitions of these ideas. While we can point to any number of orderly things that are admired, no one can point to an orderly framework that shapes our universe; well, no one who does not have that flash-fried, all-seeing thousand-yard stare that will stay with you. Which is the little game we are blithely toying with in the title of the article... as in, be careful what questions you ask, as you just might get an answer.

Isn't that a bit like asking if computing is mostly ones or mostly zeroes?

It's the relationship between order and chaos that matters. Everything interesting always happens on the boundary between the two.



