Terence Tao wrote a blog post years ago dividing math education into 3 stages: pre-rigorous, rigorous, and post-rigorous. That distinction seems relevant here.
Every mathematician knows that the integer 1 and the real number 1 are different objects [0]. We could make every invocation of a canonical isomorphism explicit, but that would make all of math far too unwieldy to do.
This is the core problem with formal proof verification. Current provers need you to prove things at such a low level that they are simply unwieldy to use. That does not mean we need to reinvent all of math. It means we need to improve formal verification tooling. Maybe we also need to train mathematicians in formal verification systems, in much the same way that they had to be trained to use computer algebra systems. We didn't reconstruct all of math. We just learned how to work within the limitations of tools when we use those tools.
[0] Unless you go out of your way to construct them to be the same thing. There is not actually any official definition of, for example, what the integer 1 is. Since so many definitions of it are equivalent, that is a fundamentally pointless conversation to have. A few textbook authors need to make a decision for pedagogical purposes. No one else cares.
You have to use a very funny notion of ⊂ for that to make sense. Mathematicians recognize that there is a lot of funny business going on behind the scenes… if you use the typical construction of these sets, then ℤ ⊂ ℚ is for sure false under the set-theoretic definition of what subsets are.
So you are going to get a lot of funny explanations of why you can still say “ℤ ⊂ ℚ” and what that means. And what is “the same as” anyway?
Yeah, I'm very familiar. The 'up to isomorphism' applies to Dedekind cuts et al.
I'm pointing out that the nature of R is independent of its definition, and that any mathematical formal system you invent to construct R (up to isomorphism) is just a description. R exists independently on its own.
The constructions of R are useful, because you can show things about them that could be applicable to R.
However, the article poses the rhetorical question of whether you should be able to say 0 is in 3. No, you should not, because while there are many constructions of the natural numbers, the universal property they all share (that there is a zero and a successor) does not permit any notion of inclusion. The construction of the naturals as sets would permit such notions, but these are not relevant to any discussion of the natural numbers.
Put in homotopy type theory terms, the natural numbers are the equivalence class of all representations of the natural numbers, and all propositions over the natural numbers are the equivalence classes of all propositions over all representations of the natural numbers.
Since one cannot draw an equivalence between '0 \in 3' and any proposition from another construction of the natural numbers (I believe), then the term '0 \in 3' is not a proposition over the natural numbers.
Put another way, I posit that there are constructions isomorphic to the natural numbers whose propositions cannot be mapped onto the space of all propositions over the construction of the natural numbers as sets. Whereas, all constructions of the natural numbers will permit a proof of the associativity of addition, or the infinity of primes. So these proofs are what should be accepted as 'proofs' over the natural numbers, but '0 \in 3' would not, because it's missing in many systems. '0 \in 3' could be used in the set representation to be the definition of less than, but it is not a statement that applies to all representations.
> I'm pointing out that the nature of R is independent of its definition, and that any mathematical formal system you invent to construct R (up to isomorphism) is just a description. R exists independently on its own.
Isn't that a bit metaphysical and open to interpretation? I would say R only meaningfully exists as defined. A mathematical "truth" is either an axiom or it has a proof?
(Not that I think that makes a difference for the original argument about 1)
See my other comment. You can define R(ℚ) as Dedekind cuts, see that there is a monomorphism I from ℚ to R(ℚ), and then set ℝ = (R(ℚ) \ I(ℚ)) ∪ ℚ. Then you have ℚ ⊂ ℝ and ℝ ≅ R(ℚ). No problem. Except of course if you work in a theorem prover that doesn't support a construction like that.
I don't think it is a "funny" notion of inclusion, but rather that once ℚ is defined, ℤ is silently redefined to be that one subset of ℚ that was used in the construction.
You can't do this in general, and everywhere, so it's still not rigorous. Consider the embedding of the nats into the reals, where the reals are defined as a subset of the binary sequences (e.g. as the usual infinite binary expansion for (0,1], choosing those that do not terminate in the all-zeros sequence, but prefixed with an Elias-gamma-coded, zigzag-encoded integer). But the usual definition of a sequence is as a function from the nats; so are you again going to redefine your reals in terms of that?
In the end you still need to maintain a correspondence between the embedding and the original set as in the typical way where you do that and consider the subset notation as a shorthand that requires you to "lift" or "wrap" through the correspondence wherever necessary.
I think you misunderstood the parent comment. They're not talking about defining the reals by setting up a mapping where each real number corresponds to a unique natural number. They're talking about defining the reals by setting up a mapping where each real number corresponds to a unique mapping from the natural numbers to {0, 1} (i.e. a unique binary sequence). The set of all binary sequences is isomorphic to the power set of the natural numbers, which is uncountable.
I'm not a set-theorist, but a set-theorist friend of mine once taught me that you can turn a countably-infinite set (such as the integers) into an uncountably-infinite one (like the reals) by applying the 'power set' operation (the set of all subsets).
Not heaps sure what this really means with respect to whether 1 the integer is really completely related (as in, equal, or the exact same thing) to 1.0 the real, though. Kinda seems like it might still need a bit more information to fully identify a real, even when it happens to be infinitely close to an integer?
The integer part is easy, since we already have the integers. Once you have D=[0,1), you can define R = Z × D. That is to say, this definition of R separates out the integral and fractional components of every real number.
There’s a very deep problem with that—every time you invent a “superset”, do you then have to redefine the subset to be a subset of that “superset”?
There is an infinite chain of supersets of the rational numbers, real numbers, etc.
Think of it like this… we can define 0, and then define 1 as the successor. Repeating this, we can have a definition for every finite number. But we cannot do this the other way around. We cannot start with ∞, define the predecessor to ∞, and then somehow get back to 0.
In other words, if you want to work backwards and say that smaller sets (like the natural numbers) are a subset of the bigger sets (like complex numbers), then you have to pick a “biggest set” containing all numbers, which is unsatisfactory. Somebody always wants a bigger set.
> There’s a very deep problem with that—every time you invent a “superset”, do you then have to redefine the subset to be a subset of that “superset”?
I have thought about this too, and I'd initially agree with you. But at some point it occurred to me that mathematical history is not so dissimilar from this. Put in very rough terms:
At first humans discovered/invented numbers (i.e. the counting numbers); these started at one, the first number. Later on, at some point we had to go back and realize that there was a zero before one, which "silently" redefined the first number as zero and created the natural numbers as the modern set-based ℕ.
edit: adding this alternative rendering of my intended comment triggered by a condescending reply: "mathematics silently redefines stuff all the time. deal with it"
There is a different answer here which is more satisfactory… which is to use notions of equality other than set theoretic equality. Which is what the article is talking about.
"Nothing" as a concept always existed of course. But it wasn't considered a number, generally. Certainly no one counted "nothing, one, two", and even today natural language doesn't include "nothing" or some equivalent as a numeral noun.
You need to be careful about the phrase “considered a number” since I believe one, or unity, was also not considered a number by some ancient civilisation - i.e. a number was only multiple copies of unity.
[I believe this YouTube video goes into more detail in its discussion of why 1 was not considered Prime in the ancient world: https://youtu.be/R33RoMO6xeA]
That's where I'm quite skeptical. Imagine you are in charge of trade or rationing important village resources in the winter. It just seems to me almost necessary that people would have a way to symbolically indicate that all the sheep are gone. As opposed to just not having any symbol for that at all.
Zero entered western writing systems through India, with limited usage in math before that. It seems like it was invented/borrowed as part of switching from additive numbers (such as Roman numerals) to positional numbers.
> In other words, if you want to work backwards and say that smaller sets (like the natural numbers) are a subset of the bigger sets (like complex numbers), then you have to pick a “biggest set” containing all numbers, which is unsatisfactory. Somebody always wants a bigger set.
Exactly what we did in the Analysis I course I attended during my bachelor's: defined the reals axiomatically, and ℕ as the smallest inductive(?) subset containing 0.
Satisfactory or not, it worked well for the purpose. And I actually liked this definition, if anything because it was original. Mathematical definitions don't need to have some absolute philosophical value, as long as you prove that yours is equivalent to everyone else's it's fine.
> as long as you prove that yours is equivalent to everyone else's it's fine
That’s exactly the point I was making in the first place.
“Unsatisfactory” just means “unsatisfactory” in the sense that some mathematicians out there won’t be able to use your definitions and still get the subset property. This means that you are, in all realities, forced to deal with the separate notions of “equivalence” and “equality”. Which is what the article is talking about—all I’m really saying here is that you can’t sidestep equivalence by being clever.
Another way to see it is to prepend every mathematical text involving ℤ with "for every ℤ such that [essential properties omitted]", so that you can apply it to several definitions of ℤ, rather than awkwardly redefining it after the fact. This is the mathematical equivalent of "programming to the interface", and is actually how mathematics is often formalized in modern proof assistants: as huge functors that abstract these details away.
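A minimal sketch of this "programming to the interface" idea (the names and encodings here are my own, not from any proof assistant): any structure supplying a zero and a successor is a model of the interface, and a statement phrased only in those terms holds in every model at once.

```python
from dataclasses import dataclass
from typing import Any, Callable

# A "model" of the natural-number interface: a zero and a successor.
@dataclass
class NatModel:
    zero: Any
    succ: Callable[[Any], Any]

    def numeral(self, k: int) -> Any:
        # build the k-th numeral by applying succ k times to zero
        n = self.zero
        for _ in range(k):
            n = self.succ(n)
        return n

    def add(self, k: int, m: int) -> Any:
        # addition phrased purely through the interface:
        # start at the k-th numeral and apply succ m more times
        n = self.numeral(k)
        for _ in range(m):
            n = self.succ(n)
        return n

# Model 1: von Neumann sets, succ(n) = n ∪ {n}
von_neumann = NatModel(frozenset(), lambda n: n | frozenset([n]))
# Model 2: strings over a one-letter alphabet (free monoid on one generator)
strings = NatModel("", lambda s: s + "a")

# An interface-level fact such as 2 + 3 = 5 holds in every model at once:
for model in (von_neumann, strings):
    assert model.add(2, 3) == model.numeral(5)
```

The two models contain completely different objects (nested frozensets vs strings), yet any proof written against the interface applies to both.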
It makes more sense to define it the other way around:
Q = Z ∪ Q′, where Q′ is the set of all rational numbers that aren't integers.
Redefining the smaller set can't work because there may be more than one larger set, e.g. split-complex numbers vs regular complex numbers. But you can define a larger set to strictly extend a smaller set.
How about this example: the natural number 0 is not a member of the integers, from a set theoretic perspective.
In the natural numbers,
0 = {}
However, {} is not an element of the integers.
This is not something I expect to be easy to understand. This is the standard set theoretic definition of integers that mainstream mathematicians use. This is not some esoteric, fringe theory.
Yes, the empty set is a subset of all sets. I think some wires got crossed somewhere because whether zero is a subset of some other set doesn’t fit into the questions we are trying to answer.
You could pick a construction where the natural numbers are a subset of the integers. This is trivial, but this is a poor strategy overall, because you can always find a bigger set of numbers to work with. You can’t take the “biggest” set of numbers and then define all other sets of numbers as subsets of that. It would be kind of like trying to count down from infinity.
The Surreal Numbers manage to be a sort of "biggest" ordered field, in NBG, though they're a proper class, not a set. All other ordered fields are subfields of the Surreals. Of course "ordered" is doing a lot of work, since it excludes the complex numbers, vectors, bivectors, etc. Whether elements of some object that isn't an ordered field should be considered "numbers" is a related but different question.
Yes, {} is a subset(⊂) of all sets, but it is not a member(∊) of all sets. For instance, {} is not a member of {{{}}}. In the von Neumann definition, 1:={0}={{}}. So, 0∊1, but 0 is not a member of {1}, which is the above set. https://en.wikipedia.org/wiki/Set-theoretic_definition_of_na...
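For what it's worth, the subset-vs-member distinction can be checked mechanically on these finite encodings; a sketch using Python frozensets as stand-ins for sets:

```python
# Von Neumann encodings as Python frozensets (a finite illustration only)
empty = frozenset()                    # 0 = {}
one = frozenset([empty])               # 1 = {0} = {{}}
singleton_one = frozenset([one])       # {1} = {{{}}}

# The empty set is a subset of every set ...
assert empty <= one and empty <= singleton_one

# ... but membership is a different relation:
assert empty in one                    # 0 ∈ 1, since 1 = {0}
assert empty not in singleton_one      # 0 ∉ {1}
```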
Conceptually, yes, every integer is also a rational number. But they are represented as different objects.
It's a bit like saying every int32 is also a double. Yes, every value of int32 fits into a double, but the bit pattern is different.
The canonical construction for rational numbers is pairs (a, b) which we interpret as a/b. An integer k "is the same as" (k, 1).
So it might be more correct to say every integer has the same value as some rational number. Of course this distinction is pointless most of the time, so we don't worry about it.
This is not unique to integers and rationals. It also applies to naturals and integers, rationals and reals, etc.
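A sketch of the canonical construction described above, with reduction to lowest terms as one common way of picking a representative of each equivalence class (the helper names are my own):

```python
from math import gcd

def make_rat(a: int, b: int) -> tuple[int, int]:
    # reduce (a, b) to the canonical representative of its equivalence class
    if b == 0:
        raise ZeroDivisionError("denominator must be nonzero")
    if b < 0:
        a, b = -a, -b
    g = gcd(a, b)
    return (a // g, b // g)

def embed_int(k: int) -> tuple[int, int]:
    # the canonical embedding ℤ → ℚ: the integer k "is the same as" (k, 1)
    return make_rat(k, 1)

# The embedded 1 is a pair, not the integer 1 ...
assert embed_int(1) == (1, 1) and embed_int(1) != 1
# ... but (2, 2), (3, 3), ... all reduce to the same representative:
assert make_rat(2, 2) == embed_int(1)
```

So "the integer 1 has the same value as the rational (1, 1)" holds, even though the two are different objects, which is exactly the distinction being made above.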
I understand that this might be a problem in a programming language, even if you decide to ignore that mathematical sets have infinitely many elements. When I programmed an RPN calculator supporting quotients and complex numbers, I had to grapple with the fact that 1 (one) is a quotient (with denominator 1) and a complex number (with imaginary part 0). I solved it by defining the functions and operators as taking a certain type and coercing the arguments to that type. One example: the logarithm can take complex numbers. If I had a quotient argument, I coerced it to a complex number before passing it to the logarithm.
However I assumed that this is only a problem in programming languages. I am a bit surprised that mathematics also seems to be affected. I am going to study it a bit.
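The coercion strategy described above can be sketched like this (a toy Python version under my own naming, not the actual calculator's code):

```python
import cmath
from fractions import Fraction

def to_complex(x) -> complex:
    # coerce a quotient (Fraction) or a real to a complex number
    if isinstance(x, Fraction):
        x = x.numerator / x.denominator  # Fraction -> float
    return complex(x)                    # float/int -> complex

def log(x):
    # the logarithm is defined on ℂ, so coerce the argument first
    return cmath.log(to_complex(x))

# A quotient argument is lifted to ℂ before the call:
assert log(Fraction(1, 1)) == 0
```

The operator itself only ever sees one type; all the "is 1 a quotient or a complex number?" questions are pushed into the coercion step.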
Practical mathematics mostly isn't affected. That's what the earlier commenter was referring to with Tao's stages of maturity.
In the pre-rigorous stage, you don't know it's an issue. In the rigorous stage, you know it's an issue. In the post-rigorous stage, you know it's not an issue (i.e. you know you know how to write down all the details if you needed to, and you know it will all work how you might hope it would, and so you don't need to).
An isomorphism is a way to relate two different sets, pairing each element with exactly one element of the other set, such that it does not matter whether you do operations before or after translating.
In my calculator the set of reals is isomorphic to the set of complex numbers with imaginary part zero:
ℝ ⬄ { c | c ∈ ℂ and im(c) = 0 }
So I can coerce to complex numbers with impunity because of that isomorphism! And I know it is an isomorphism because adding two reals and then coercing to complex is the same as coercing first and then adding.
TIL: In a way mathematics has types like programming languages.
I hope I am not too far off here. I didn't look up anything, this all went into my head this morning.
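That "add then coerce equals coerce then add" property can be spot-checked mechanically (sampling is evidence, of course, not a proof):

```python
def coerce(r: float) -> complex:
    # the embedding r ↦ r + 0i
    return complex(r, 0.0)

# "add then coerce" agrees with "coerce then add" (and likewise for *):
for a, b in [(1.5, 2.5), (-3.0, 0.25), (0.0, -7.5)]:
    assert coerce(a + b) == coerce(a) + coerce(b)
    assert coerce(a * b) == coerce(a) * coerce(b)
```

This is exactly the homomorphism condition: the embedding commutes with the operations, so it doesn't matter on which side of the coercion you compute.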
This is sort of true, but there are isomorphisms which aren't "good enough" in some sense. E.g. if you're familiar with linear algebra, there's a thing called a dual space, which (for finite dimensions) is isomorphic to your space, but when you change coordinates, dual spaces change their coordinates backwards. Or for complex numbers, when you extend R by adding `i` (sometimes written C=R[i]), if you look carefully, you'll see that you could have instead added an element α = -i, and you'll notice that R[α] is isomorphic to R[i] in a way where the isomorphism maps R to itself; in a sense you can't tell whether you added i or -i, and this degree of freedom is important when studying field extensions.
The closest thing I'm aware of to "pretty much equal" is "unique up to unique isomorphism", so they may not be equal, but they're isomorphic, and there's no flexibility to make any choices of which isomorphism to use. But "isomorphism" also implies you have a particular context/structure in mind that you want to preserve, e.g. a set isomorphism (a bijection) may not be a linear isomorphism (~an invertible matrix). In practice, you may be working in multiple contexts at once, so you invent Functors which map one type of morphism to another, and now you care about Categories.
To expand: they're "equivalence classes". The digit 1 represents a value, in the same equivalence class as 1/1, 2/2, 1.0, 1+0i, {0|}, 1.0+0i+0j+0k, etc. ad infinitum.
You would usually use something like \hookrightarrow instead. The set is isomorphic to a subset.
In the same way the real line is "included" in the plane (\R \times \{0\}) or the sphere is not only a subset of the whole space but a submanifold.
Math uses lots of polymorphism and overloaded notation, but that is also part of the beauty, where "A=A" can actually be a deep result about concepts being compatible.
There is a subset of Q that is canonically isomorphic to Z. Probably the distinction is too technical unless you want to do some weird advanced algebra. (Note that in most weird advanced math the distinction doesn't matter.)
So in most cases, mathematicians and non-mathematicians just write Z ⊂ Q and live happily ever after. But if you get supertechnical you should use scare quotes, Z "⊂" Q, or, to look more professional, Z ↪ Q (because the inclusion is an injective function, and sometimes it's useful to think about it as a function).
---
To add some confusion: imagine that you buy at the supermarket a copy of Z that is green and another copy of Z that is red.
1(red) + 2 (red) = 3(red)
1(green) + 2(green) = 3(green)
(If I get supertechnical, I have to define a +(red) operation and a +(green) operation. And probably also an =(red) and an =(green), as the article discusses.)
Are they the same Z, just canonically isomorphic, or does it not matter?
---
To add even more confusion: you have a copy of the abstract Z and a copy inside the rationals, written as n/1:
1 + 2 = 3
1/1 + 2/1 = 3/1
Are they the same Z, just canonically isomorphic, or does it not matter?
The typical construction is just to show that they _can_ be constructed. Once you have "primitive" integers, rationals, ..., you pick (for this finite example) the largest of them, and define the "integers" to be a subset with a certain property. That's also well-defined, and you find _exactly_ the same integers in the rationals and reals.
I think the real funky business happens right between ℚ ⊂ ℝ
but maybe all I'm really trying to say is "hey, look at me, I understand how all of ℕ ⊂ ℤ ⊂ ℚ"
I'm very very sure that it's possible to give a fully ZF set-theoretic construction of the rationals using sets. In fact, IMO the annoying thing is that there are two trivially equivalent constructions that I can think of off the top of my head, and I find that even worse than there being none at all.
Traditionally, you define ℚ as equivalence classes of pairs of integers. The integer 0 is an integer, it’s not an “equivalence class of pairs of integers” and therefore it’s not a rational, in the set-theoretic sense.
so yes, to answer one of my own questions. I was only really saying that I don't really know any of those constructions; but know all the "preceding" ones.
I gotta figure out Dedekind cuts, or at least learn Tarski's fixed-point theorem, which I'm sure would allow me to much better understand Tarski's construction
all in the slow lifelong process of understanding and learning to draw Post's lattice
finally, to answer your question, I disagree with saying that there's funky business between ℤ ⊂ ℚ because of my alleged claim that there is at least one construction which avoids the problem you describe, but as I was trying to say, these would have a non-unique way to construct number zero which nobody likes
No funny notion of ⊂ at all, it is about how you construct ℚ, for example. You construct Q(ℤ), show that ℤ is embedded in Q(ℤ) via a monomorphism I, and then define ℚ as (Q(ℤ) \ I(ℤ)) ∪ ℤ. For example. You are right that nobody cares about how this works in detail. It does assume though some notion of ℚ and ℤ as collections, but not necessarily sets from ZFC.
Yes, but in a software like a theorem prover you must make the conscious decision whether you want your prover to be able to assume ℕ ⊂ ℤ or not. All the type theory based ones say "not". That is not very convenient.
> I think pretty much every mathematician agrees that […] 1 the integer is the same as 1 the real number
Yes and no.
NO: there already are multiple ways to construct “1 the natural number”, even if you restrict that to set-theoretic ones. See https://en.wikipedia.org/wiki/Set-theoretic_definition_of_na.... Von Neumann defined the natural number 1 as {∅} (the set with as only element the empty set), Frege and Russell as the equivalence class of all sets with a single element (and that’s not a circular definition)
Integers typically are constructed from those (See for example https://mathesis-online.com/integers, which defines an integer i as the equivalence class of pairs (a,b) of natural numbers such that a = b + i)
In this construction, "the integer 1" is the infinite set {(1,0),(2,1),(3,2),…}. That obviously (in the mathematical sense) is different from the one-element set {∅}.
YES: when you decide to ignore the low-level stuff and just use the intuitive definition of reals and integers.
(Alternatives:
- don’t ignore it, but start at the basis, calling the integers something else, then prove equivalence between a well-defined subset of the reals and the integers and then give that subset the name “integers”.
- directly construct the reals without first constructing the integers and then define the integers as a subset of them. I’m not aware of any way to define the reals without first defining integers, though.
)
You ellipsed ([...]) the "can assume" part of my statement. Every mathematician knows the different ways of constructing numbers. It really is a simple YES.
No, it isn’t. Whether it is depends on the level at which the mathematician operates, and will differ depending on what they’re working on. That’s precisely the subject of this article: there’s an intuitive idea that, upon close scrutiny, turns out to be more complex than it appears to be.
So, if you want to start from the basis in a proof assistant, you’ll have to discriminate between them and either give them different names or introduce context from which the computer can infer what you mean.
If, for example, you want to prove that two definitions of integers are equivalent, you need two different names.
It helps to state carefully what you are trying to achieve. I want a theorem prover where constructing numbers such that ℕ ⊂ ℤ is easy. Doing this in a theorem prover is not trivial, for example none of the type based theorem provers allow that (although, Isabelle/ZF might qualify; it is based on set theory on top of a weak intuitionistic type theory; but here ℕ and ℤ are not types, but sets of the same type). It is trivial though if you are a mathematician, no matter what you are working on. You know that these constructions can be done, and that as a result you can assume ℕ ⊂ ℤ (if you want to). So as long as your theorem prover doesn't support something as trivial as that, it is lacking.
And yes, that is exactly what Buzzard is talking about. For the case I am talking about here though, that is just table stakes: make sure your prover can do ℕ ⊂ ℤ ⊂ ℝ ⊂ ℂ. If you can do that, you can start worrying about more advanced issues of equality, and you might find that you have already solved a significant amount of them.
I can confirm that in Isabelle/ZF one can set up a context (locale) with the meaning of the ℕ ℤ ℝ ℂ symbols defined so that ℕ ⊂ ℤ ⊂ ℝ ⊂ ℂ. However ℕ for example will not be equal in such case to the canonical set of ZF natural numbers where 1 = {0}, 2 = {0,1} etc.
I came across this topic in lean, where you have different definitions of 0 and 1 within the same context and you can auto-convert using coercion.
    instance : One Hyper where
      one := ⟨1, 0, 0, 0⟩

    scoped notation "1" => One.one -- doesn't work ("invalid atom") and unnecessary

    instance : OfNat Bool 1 where
      ofNat := true

    instance : Coe ℤ Bool where
      coe z := z ≠ 0

    instance : Coe ℝ ℝ⋆ where
      coe r := ⟨r, 0, 0, 0⟩
Basically all the fun of C++ casting, except that you are always safe.
It is a different story in Lean though, because Lean, as most other logics based on type systems, doesn't have subtypes. So all you have are coercions. That's actually a bad thing.
Lean does have something called Subtype, but that is a disjoint type containing different values than the original type. With a genuine subtype, if S is a subtype of T and x : S, then also x : T.
Because coercions are just a work-around, and more machinery hidden from you, but still there causing problems (like types).
If you are coming from C++, then yes, Lean coercions are a step up. If you are coming from mathematics, then Lean coercions are just a cumbersome work-around.
Cumbersome compared to just having ℕ ⊂ ℤ, instead of having a coercion c from ℕ to ℤ, with complicated algorithms of where to insert c automatically, and then hiding the result from you, because otherwise you would see with your own eyes what a mess you are in.
But with coercions you do have {↑n | n ∈ ℕ} ⊂ ℤ, which is the subtype that you really want!
The issue is that not all properties of subsets can be lifted to supersets. As an example from analysis, you can coerce the reals ℝ to the extended reals ℝ∞, but you lose some of your rewrite rules because you need to choose how to define edge cases like (∞ - ∞) in ℝ∞. Making these choices is common in mathematics (at least in analysis) and in pencil and paper proofs it's usually just handwaved away.
Using subtypes, a proof checker would have to be explicitly given which ambient superset you're going to rewrite something like ((a : ℝ) - (b : ℝ)) in... this is more or less the same as using coercions to distinguish between ℝ and {↑r | r ∈ ℝ} ⊂ ℝ∞ except with subsets type inference is way harder.
I'll admit that coercions are annoying, but subsets don't let you get around it: there's an essential step up in complexity between sloppy pencil-and-paper proofs and verified code. Personally, I'm really interested in seeing how much of mathematical handwaving can be rigorously automated away. There's a lot of cool work in this area, but I don't think the FM community has a definitive answer for it yet.
Let's be clear, I am not arguing for subtypes. Subtypes are a mess, that is why no type system supports them. I am arguing against any types at all. So type inference being harder is not a problem, because there won't be any type inference. I know, that is a tough pill to swallow, type inference is like a hard drug for computer scientists.
Subsets or subcollections, on the other hand, are fine. Of course you will have proof obligations, but that is not a problem, and automation can take care of most of them (note that automation is much more flexible than hardcoded type inference).
The (principal) cube root of the integer -1 is -1, but the (principal) cube root of the complex number -1 is (1 + i√3) / 2. They sure seem like different objects.
Though I guess whether you want to blame that on the operators vs. the objects themselves can be left to your taste, but I'm not sure what "these objects are equivalent" would mean if the behaviors are left unspecified as characteristics of the objects (which would be the former case).
That is indeed a nice feature of types: You can overload notions. In this case you overload the notion "principal cube root" (PCR) to mean different things for ℤ and for ℂ.
But really, "principal cube root" as you would like it to behave is not well-defined just for a number; you also need to provide the algebraic structure you consider it in, as in PCR(ℤ, -1) = -1 and PCR(ℂ, -1) = (1 + i√3) / 2.
Alternatively, just set PCR(-1) = (1 + i√3) / 2. That probably makes the most sense, as there is not much value in a PCR notion for integers in the first place.
"Principal" root isn't an interesting mathematical object. It's just an arbitrary way to choose a preferred element from an equivalence class, for when you're too lazy to consider the whole class and want to pretend that "implies" ("only if") is the same as the biconditional ("if and only if").
At least in my education, you first define the natural numbers (where 0 and 1 are special, and the rest are defined in terms of that (ie 2 = 1+1)), then you define negation, which gives you the negative numbers. Then you layer on multiplication and division, which gives you rational numbers, and so on.
So, it's the same "0 and 1" definition all the way through, just with additional operations being added to the mix.
The usual formal definition for the rational numbers is equivalence classes of pairs of integers. The zero is then the equivalence class of (0, 1) which is not the same as the integer 0.
You could certainly somehow get it to work by starting with the closure of the division operation but would introduce a lot of unnecessary headache along the way.
Complex numbers are just pairs (real and imaginary) of Cauchy sequences of pairs (numerator and denominator) of pairs (positive and negative) of nested versions of the empty set (modulo all the necessary equivalences to make this work).
So the natural number {{}} is canonically included in the complex numbers as [something].
Natural {{}}
Integer ({{}},{})
Rational (({{}},{}),({{}},{}))
Real (({{}},{}),({{}},{})), (({{}},{}),({{}},{})), (({{}},{}),({{}},{})), ...
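The first few levels of that tower can be written out concretely, with Python frozensets and tuples standing in for sets and ordered pairs (the real level would need an infinite Cauchy sequence, so the sketch stops at the rationals):

```python
# frozenset stands in for "set", tuple for "ordered pair"
ZERO_NAT = frozenset()              # natural 0 = {}
ONE_NAT = frozenset([ZERO_NAT])     # natural 1 = {{}}

one_int = (ONE_NAT, ZERO_NAT)       # integer 1 = ({{}}, {}), read as "1 - 0"
one_rat = (one_int, one_int)        # rational 1 = (({{}},{}), ({{}},{})), read as "1/1"

# Every level "is" the number one, yet each is a literally different object:
assert ONE_NAT != one_int != one_rat and ONE_NAT != one_rat
```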
> Every mathematician knows that the integer 1 and the real number 1 are different objects [0].
This is quite interesting to me as I'm trying to understand, recreationally, these concepts.
From what you've written it looks like there are several ways to construct numbers. I'm aware of some of them but I don't know if they are connected or related, e.g. Dedekind cuts, von Neumann numbers, etc. It'd be great if you could list all such constructions.
There are two main approaches: explicit constructions and axiomatic definitions.
Dedekind cuts and von Neumann numbers are both examples of explicit constructions. You can point to exactly what a given number "is", and then prove properties about your construction.
The other approach is the axiomatic definition, where you define what properties you want your numbers to have. You then (hopefully) prove that such a system exists and is, in some sense, unique.
Taking the natural numbers, the most common explicit construction is the set-theoretic von Neumann construction, where 0={}, 1={0}, 2={0,1}, 3={0,1,2}...
You can also imagine defining 0={}, 1={0}, 2={1}, 3={2}...
Note that while we say these are equivalent definitions, they produce different results in the sense that, for example, 0∈3 in the former, but not in the latter. [0].
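This difference is observable mechanically; a sketch with Python frozensets standing in for sets (the second encoding is the one sometimes attributed to Zermelo):

```python
def vn(k):
    # von Neumann: succ(n) = n ∪ {n}
    n = frozenset()
    for _ in range(k):
        n = n | frozenset([n])
    return n

def zermelo(k):
    # the alternative encoding: succ(n) = {n}
    n = frozenset()
    for _ in range(k):
        n = frozenset([n])
    return n

# "0 ∈ 3" holds in the first construction but fails in the second:
assert vn(0) in vn(3)
assert zermelo(0) not in zermelo(3)
```

Both encodings satisfy the zero/successor interface, yet they disagree on a membership question — which is exactly why such questions aren't regarded as statements about the natural numbers.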
You can also define natural numbers as a subset of the cardinals. In this construction, 3 represents the collection of all sets that can be put in a 1-to-1 correspondence with the set { {}, {{}}, {{{}}} }. Here, the exclusion of the various infinities is a rather arbitrary part of the definition.
You can also define integers as a subset of combinatorial games, given by 0={|}, 1={0|}, 2={1|}. This naturally leads to negative numbers, -1={|0}, and a zoo of "numbers" not seen outside of combinatorial game theory, such as ↑ = {0|{0|0}}. Still, if you restrict yourself to the integer-looking numbers and follow the game-theoretic rules of addition, you get something equivalent to the natural numbers.
You can also define them as strings constructed from an alphabet with a single character, so 0="", 1="a", 2="aa", ... (This is the free monoid on a one-element set.)
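The free-monoid view fits in a few lines of Python (names here are mine, for illustration): numbers are strings of "a"s, and addition is just concatenation, the monoid operation.

```python
# Naturals as strings over a one-letter alphabet.
# Addition is concatenation; the empty string plays the role of 0.

def encode(n):
    return "a" * n

def add(x, y):
    return x + y  # the monoid operation: string concatenation

assert add(encode(2), encode(3)) == encode(5)
assert add(encode(0), encode(4)) == encode(4)  # "" is the identity
```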
Lambda Calculus gives us the church encoding, where:
* 0 = λf.λx.x
* 1 = λf.λx.f x
* 2 = λf.λx.f (f x)
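Since Python has first-class functions, the Church encoding above transliterates directly; a small sketch (the decoder `to_int` is my own helper, not part of the encoding):

```python
# Church numerals: the numeral n is the higher-order function
# that applies f to x exactly n times.
zero = lambda f: lambda x: x
one  = lambda f: lambda x: f(x)
two  = lambda f: lambda x: f(f(x))

# Successor: apply f one more time than n does.
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# Decode by counting applications of an ordinary integer successor:
to_int = lambda n: n(lambda k: k + 1)(0)

assert to_int(zero) == 0
assert to_int(succ(two)) == 3
```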
Under the axiomatic definition of numbers, the classical definition is given by the nine Peano axioms, which amount to assuming the existence of 0 and a "successor" function S(n), informally corresponding to S(n) = n+1.
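This Peano-style characterization is essentially what proof assistants build on. A minimal Lean sketch (names are illustrative, not the standard library's):

```lean
-- An inductive type with exactly the Peano generators: a zero
-- and a successor. Induction comes for free from the inductive definition.
inductive MyNat where
  | zero : MyNat
  | succ : MyNat → MyNat

-- Addition defined by recursion on the second argument,
-- mirroring the usual Peano recursion equations for +.
def add : MyNat → MyNat → MyNat
  | n, .zero   => n
  | n, .succ m => .succ (add n m)
```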
All of the above examples are things I have actually seen and that arise naturally in their respective branches of mathematics.
[0] This contradiction is not actually an issue, because no one ever actually says 0∈3, even though it is technically true in some constructions.
We must remember that these constructions retcon the actual human experience with numbers, which is firmly rooted in counting and measurement; the formal definitions always come after this experiential basis. They appear driven by an aesthetic consensus, mostly about minimalism and axiom flavor. Both individual experience and human history suggest that counting and measurement are more fundamental, in the sense that they always come first: all constructions derive from them, must maintain consistency with them, conflicts must be resolved in their favor, and it is safe to ignore inconsistencies that cannot even be expressed within them.
Without formal constructions, humans have a nasty habit of proving things that are wrong or not even wrong. Our ancestors dealt with this problem in the 19th century by creating tools to help distinguish truth from balderdash.
Your statement is consistent with my point, which was to point out the psycho-social origin of such constructions and therefore the limits of their utility. I do not say they are not useful at all.
In those terms, the foundations (of any topic or field) are never fundamental. I think people in this thread are using a different meaning of "fundamental".
These are all constructions though. They don't say what the natural numbers are.
The natural numbers are a set with a starting element in which every element has a thing immediately following it.
Things like 0∈3 make no sense, since 0 refers to the first thing in whatever representation you have, not to any particular one.
I love formal theorem proving and provers, but this shouldn't lead us to believe that the natural numbers are fundamentally any particular representation.
N is defined by its properties, not its representation.
> These are all constructions though. They don't say what the natural numbers are.
A construction is exactly what would tell you what they are.
> The natural numbers is a set with a starting element and that has, for each element, a thing immediately following.
OK, but, like, which one? There are lots of sets that this describes.
> Things like 0 in 3 make no sense since 0 refers to the first thing in whatever representation you have, not a particular one.
If 0 and 3 are sets then we can legitimately ask whether 0 is a member of 3 or not. If 0 and 3 aren't sets then what are they? If the natural numbers are a set then 0 and 3 should also be sets, since set theory only talks about sets.
> N is defined by its properties not its representation.
Then what is it? If you say it's a set with those properties, then it should be a set and we can ask set questions about it. If you say it's somehow all of the sets with those properties, or some other concept that somehow encompasses them, then you still haven't explained what it actually is.
If I choose to wear a fedora, is the fedora an essential part of who I am? Or an arbitrary choice that can be changed?
This question goes back to Plato.
You use von Neumann naturals. I use Peano naturals. Which are the real naturals? Can they both be the naturals, even though they are clearly different?
The natural numbers are not any one construction. They are the abstract idea whose properties are what all the constructions have in common.
Math is like God. We use feeble human language to approximately talk about something that is beyond our ability.
> Math is like God. We use feeble human language to approximately talk about something that is beyond our ability.
That sounds like giving up. The approach the article is describing is a lot more satisfying - forming this concept of equivalence, and then using it to give genuine, rigorous answers to questions like:
> You use von Neumann naturals. I use Peano naturals. Which are the real naturals? Can they both be the naturals, even though they are clearly different?
> OK, but, like, which one? There are lots of sets that this describes.
Any, and all. One can do the thinking without specifying what. I believe this is the notion of a 'universal property' in category theory.
> If 0 and 3 aren't sets then what are they?
I mean, if you want to give me a construct I would use, I'd say the following
0 = \s -> \z -> z
3 = \s -> \z -> s (s (s z))
If you say that these are akin to church numerals, that is correct if you look at it as a lambda calc term. However I'm saying something much stronger than just a lambda calculus term. That's just my notation.
I'm saying that zero is the concept of taking two things (one an initial thing and the other a method to take one thing and get the next thing) and then returning the initial thing. While three is the concept of taking the same two things and then doing the method three times on the first thing. I used lambda notation because that is an easy way to convey the idea, not because that is what three is.
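The representation-independence being claimed here is easy to demonstrate: "three" as the act of applying a step three times works the same whatever the "things" are. A small Python sketch (names are mine):

```python
# "Three" as a concept: take a step function and a starting thing,
# and apply the step three times. The same term counts integers
# or builds strings, without committing to either representation.
def three(step, start):
    return step(step(step(start)))

assert three(lambda n: n + 1, 0) == 3         # counting
assert three(lambda s: s + "a", "") == "aaa"  # free-monoid strings
```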
> Then what is it? If you say it's a set with those properties, then it should be a set and we can ask set questions about it. If you say it's somehow all of the sets with those properties, or some other concept that somehow encompasses them, then you still haven't explained what it actually is.
It's a set in the colloquial sense, not in the ZFC sense.
There's an idea of a thing, and then the definition of the thing. Both are valid in my mind.
For example, heat is the movement of atoms. It's also the energy in a thing. But it's also the feeling of hot that I cannot describe. It's also an idea in its own right. Things can be many things. There's an infinite hierarchy of description that one may apply to concepts, and, in my philosophy, at some level the description becomes unable to be expressed.
> One can do the thinking without specifying what.
After one has figured out a rigorous approach, one can generally ignore it in day-to-day work. But you need to have that foundation there or you can go horribly wrong. (Analogy: we don't generally think about limits when doing day-to-day calculus. But when people tried to do calculus pre-Weierstrass, unless they were geniuses they "proved" all sorts of nonsense and got crazy results).
> I believe this is the notion of a 'universal property' in category theory.
That's one of the good answers to this problem. But you still need the constructions (if only to prove that an object with the required properties exists at all) and you need a lot of the kind of work the article is talking about, where you establish that these different constructions are equivalent and you can lift all your theorems along those equivalences.
> I'm saying that zero is the concept of taking two things (one an initial thing and the other a method to take one thing and get the next thing) and then returning the initial thing. While three is the concept of taking the same two things and then doing the method three times on the first thing. I used lambda notation because that is an easy way to convey the idea, not because that is what three is.
OK but then what kind of things are these "concepts"? What kind of questions can and can't we ask about them? If you care about "what the natural numbers are" then these kind of questions are worth asking.
Wikipedia has a lot of resources. Naturally, it won’t all be accessible (some of these constructions sit at the top of a mountain, and you have to climb the mountain first).
> ℤ ⊂ ℝ
That is not at all obvious without some canonical isomorphism acting on ℤ, or at least is a behaviour strictly dependent on your particular set theoretic construction of the reals.
E.g. for the Cauchy sequence construction of the reals, the image of the integer 1 under this canonical isomorphism would be the equivalence class of the constant sequence (1,1,1,...), but these are of course not the same object in a set-theoretic sense.
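A toy model makes the point vivid; in this sketch (all names are mine, and equality of reals is only checked to finite precision, since true equality of equivalence classes is a limit statement) a "real" is just a rational sequence:

```python
from fractions import Fraction

# Toy model: a "real" is (a representative of) a Cauchy sequence of
# rationals, encoded as a function n -> q_n.

def embed(k):
    """The canonical embedding: integer k -> the constant sequence k,k,k,..."""
    return lambda n: Fraction(k)

# A different sequence with the same limit: 1 + 1/(n+1) -> 1.
approx_one = lambda n: Fraction(1) + Fraction(1, n + 1)

# As objects these are distinct; they are "equal" only as equivalence
# classes, i.e. their difference tends to 0 -- which we can only
# sample at finitely many indices:
assert all(abs(embed(1)(n) - approx_one(n)) < Fraction(1, 100)
           for n in range(200, 300))
```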
Trouble arises in any logical system when we start to make statements on the system itself. As long as there's a strict separation between the target of your logic and the logic itself, you can make it all work out, but recursion is what kills you. To avoid this, many thinkers have come up with a sort of infinite tower of sorts. For example, in ZFC set theory, sets are ordered in cumulative ranks. In theorem proving languages like Coq, Agda, etc, there are a hierarchy of universes. Even Aristotle's four causes can be seen to be a small subset of the infinite hierarchy of cause. It's just the nature of logic.
Similarly, there is a hierarchy of equality at increasing abstraction levels. There's naive equality which is that two things are literally the same. Then there's equality as to the 'class' of objects, and we can even go beyond that talking about equality of equalities (another word for analogies). Most stop there, but you can go on and on and on.
I'm currently interested in how homotopy type theory systematizes this notion of varying 'levels' of equality, but it too has issues. It's an open question.
The article does sensationalize a bit because math is just a method of discourse. Equality means different things to different fields, and nothing about this kind of equality means that previous results are wrong.
Homotopy type theory isn’t magic, but it nicely explores the difference between (strict) equality and context dependent equivalence. (Okay, technically that’s univalence.)
I think HoTT nicely capstones the idea algebra and geometry are equivalent by pointing out that semantics itself is defined by the topology of categorical diagrams — and anywhere we find compatible topology is equivalent. Neat.
In the first year of university you write congruence with three horizontal lines. In the second, you just use two lines, aka an equals sign. Everyone knows = is overloaded.
The same happens with other signs, like the arrow of limit. In many cases it's confusing so you add a small note to the arrow, unless it's obvious (whatever that means).
When I took Dan Freed's algebraic topology course, one time he asked us if two spaces were the same. Then he looked at us for a moment and said that actually that was a "Bill Clinton question: the answer depends on what the meaning of the word is is".
Maybe I'm totally lost, but this thread of self-replies makes me suspect a Markov chain generated them or something like that. The same is true for some—but not all—of your other comments, which feels really weird to me. Of course, maybe I'm way off base, and what you're saying makes total sense and is just going over my head.
I wonder if dang (or another mod) has already seen this....
I’ve also noticed this with vinnyvichy and 082349872349872 (who mostly seem to respond to each other). Probably it should be reported to dang if it hasn’t been already.
(The first time I saw it, I wondered if I was going insane, because the conversations seemed unrelated to the topic and nonsensical in themselves… good to know there’s nothing I’m missing!)
It's a bit like an abacus: the motion of beads on each individual rod may not seem to make much sense, but there is a meaning to the calculation taken as a whole.
(Hmm: presentation does a great deal for providing meaning; imagine a "write-only" tetris game where instead of presenting the current state of the playfield, completed rows stayed on the display and dropped pieces stretched through them to incomplete rows. I think that'd be much more difficult than the traditional view)
"Tit for Tat" only requires storing one bit, for the last interaction. (Its robust-to-noise variant, "Tit for Two Tats", still only requires constant memory.)
(I think I've talked about bufferbloat on HN before, but my Algolia-fu is lacking today) In this particular parent, I'm not referring to bufferbloat, but rather the mechanism "chipping", by which spread-spectrum communication can hide near, or even below, what more traditional signal processing might consider the "noise floor".
Jiang Ziya gets a good start on the idea in §25: "Secret Letters" of the "Dragon Way" (https://zh.wikisource.org/wiki/六韜#陰書第二十五), in which he suggests sending sensitive messages by sending 1/3 of their characters over each of three different channels.
The birds (鸟语) use frequency sweeps (chirps) to provide noise protection: most noises either occur across nearly all frequencies but are localised in time (a "pop") or occur across long stretches of time but are localised in frequency (a "hum"); by changing frequency with time one minimises interferences with either of those noise sources.
Finally, frequency-hopping proper involves sending multiple chips for each bit, on a nearly uncorrelated (prng-derived) schedule of frequencies.
(because we have sign as well as frequency, even on a single frequency swapping between sign conventions allows one to extract signal from substantial amounts of noise. A Dicke switch would be the scientific method in microcosm, hopping relatively quickly between signal and control)
EDIT: Mao Basic Tactics (1937) mentions multiple redundant messaging, but only as a repetition code, not even as fancy as Jiang's scheme. [Mao evidently had the problem of composing fighting forces capable of independent action with peasants who could read fewer than 20 characters! Yikes! (I mean, not that I'm much better, but I'd guess by analogy with kanji I probably have 20 hanzi already just in passing?)]
vinnyvichy, any chance your baidu-fu would be enough better than mine to find either that or On Guerrilla Warfare (1937) in the original hanzi?
Thanks; that's 1938* 《抗日游击战争的战略问题》but I'd been hoping for the 1937 work,《游击战》, which even in english only seems to appear in various US translations as well as a "collected works" that was printed in India, but is not canonical according to China?
When I try baidu-ing I get plenty of hits for Guevara's title but none for Mao :(
* I think en.wikipedia.org is just plain wrong here but maybe they consider them two editions of the "same" work?
Yeah, Griffith's translation is what I keep turning up, no matter what character set; maybe if I ask Fort Meade really nicely "The Dragon Lady" would be willing to samizdat me their version of the original?
I don't doubt that it's an accurate translation (Griffith retired from the USMC and did a doctorate in chinese military history before[0] attempting it); I'm just curious[1] how various phrases from the translation were phrased in the original.
[1] the only other chinese text I've looked closely at has been bits of the Mawangdui Tao Te Ching, and I'd guess using that as a reference style would be like attempting to write contemporary english based on the King James bible.
I might be misreading the signal in your messaging as frustration that the 1938 text I posted ("//..banned-...") isn't the original source of the Griffith translation.
Here is the first para:
"In a war of revolutionary character, guerrilla operations are a necessary part. This is particularly true in war waged for the emancipation of a people who inhabit a vast nation. China is such a nation, a nation whose techniques are undeveloped and whose communications are poor. She finds herself confronted with a strong and victorious Japanese imperialism. Under these circumstances, the development of the type of guerrilla warfare characterized by the quality of mass is both necessary and natural. This warfare must be developed to an unprecedented degree and it must co-ordinate with the operations of our regular armies. If we fail to do this, we will find it difficult to defeat the enemy."
Okay you might rue that it isn't a scan, but I doubt the hanzi isn't a word for word transcription of the original. Unless.. you mean you want the traditional hanzi?
The one translation I find questionable is of the phrase 民众性 to "quality of mass", maybe appealing to Clausewitz?
EDIT: in fact if you click on 对比 you get scans of the original, but it crashed my mobile browser lol..
I'm ruing that we have 1938 in both languages, but 1937 only in english translation (although I have every reason to believe that translation has, when choosing between fidelity and beauty, opted for the former), and I don't believe 1937 and 1938 are the same composition (or at least if 1938 is an expanded 1937 it also clearly has deletions).
[traditional vs simplified is not an issue here, although I note that 丑 (note there are sibboleths even in the simplified strokes!) as simplified 醜 loses a potential flight of fancy: does the existence of unitary ghosts imply that subghosts and ghost-quotients are two different structures (in general: for abelian hauntings they coincide?), for the latter may be non-unital? This would have important repercussions for the kernels of mappings between ghosts...]
Hard to do this on a mobile phone so maybe point me to suspected insertions/deletions in the 1938 translation as you seem to already have a list of them.
Indeed I would have most preferred "spirits & spectres" because that one has a neutral connotation in Chinese.
The referent of your 三殺法, "kill the buddha-spirit, kill your demised-forebears"《临济录》(written back when Taoism had not been fully differentiated from Buddhism) was way more suicidal in ideation..
What are the various connotations of gods, ghosts, spirits, and spectres in Chinese?
(Continue the series: 天 大 人 小 ??)
Come to think of it, "God is Dead" is true for both christians and Nietzsche's madman: as Gautama's charioteer explains, living things are subject to age, disease, and death; the christian god is subject to none of these, therefore He –not being quick— is dead.
On a Blindsight tangent, that's exactly what "techniques will occur when a void is found" was about: the passive is on purpose, encoding the philosophical zombieism of martial zen.
The conscious mind is useful (like s/w for resolving a TLB miss) for calculating useful things to have in the cache, but in an antagonistic situation, one needs to have a high percentage of h/w-served hits.
If one consciously decides to employ a technique, one will certainly not surprise oneself, and one will probably not (unless one's heart is very spherical indeed?) surprise an opponent who is practicing the 3rd of the "3 pwns"*.
(indeed, I find the evolutionary just-so for consciousness to arise from the 3rd pwn: once one has the grey matter in place to model others, modelling oneself would be a natural side effect)
[The sixth patriarch happened to be passing by. He commented: "Neither the archer nor the target is in motion; mind is in motion".]
Dx: There are more things in Heaven and Earth than allow a small (speak not of the 10'000 things) finite presentation: ignoramus et ignoramibus (as our light-cone expands we always encounter new information on its boundary)
Rx: Sheaves allow one to synthesise globalitites by coordinating handfuls of localities.
Px: The hacker will move amongst essential entropy as a fish (whale?) swims in the sea.
EDIT: the ironic (Kuznets, 哈哈) thing about the Mongols: they overwhelmed the other barbarians with advanced tech (gunpowder, siege engines, biowarfare), but now we admire their neolithic technik.
I find it plausible, though, that compound bows might've been invented in the neolithic (assuming one already has the adhesive technology to haft spear points), but never became popular until the bronze-age development of horseback riding because until then having a smaller bow (that was both heavier and took over 5x as long to make) was little advantage for a lot of disadvantage.
Serovo is just before 5000 YBP (3000 BC), and locally late neolithic, but that's because Siberia has no chalcolithic, so neolithic gets directly succeeded by the bronze age. Given the uncertainty of dating back then (some people believe riding had already been invented, others that it would not be for some time after), I'm happy to consider those as possible or even likely compound bows.
Sorry for leading us in circles with the pedantry!
No prob, that led to my exploration of whether horses might have been used in the area around that time. Maybe you want to be a bit more killer to improve communication?
As for the chipping stuff, I'm quite certain (despite your PhD????) gaussian noise has been assumed somewhere, so right now I'm searching around for serious attempts (not just proposals) to try to use, say, squeezing, to get below the shannon(-hartley?) limit.
I had the choice of starting with Geroch or Bamberg+Sternberg and I went with the cleaner notation and crappier words, might not have been prudent.
Okay maybe you weren't as familiar with french academic activism as fencing (although maybe you might still ask your local experts about recent rule changes with respect to say Korean dominance in sabre -- curious about what is it that the Koreans know that is so unteachable, plus, people getting upset might be a good sign)
Point was, if Murderous Saint Ted was raised a physicist, he might have found socially acceptable ways to get his messaging out ( I don't know, preaching to pliant students?) buuuuut I would think that's still suboptimal.
Botai culture was before, but whether it was close enough to be in the area and whether they rode as well as ate their equines are both open q's.
Interesting, I had been talking about chipping not in the quantum regime, but in the classical[0] and metaphorical[1] regimes.
Sorry, I had thought I had the referent to be J-Y G, as I'm currently in the middle of The Blind Spot[2] and enjoying how he'll discuss something foundational in mathematical logic one moment and the next he'll make a social or political observation. (as does Körner, but Girard is more gallic and more so)
泰德干公 (am I doing this right?) might've done better as a physicist, but (unlike, say, Pol Pot) thankfully he combined revolutionary[3] fervour with a human-organisational capability unlikely to have been up to running a church bake sale[4].
Finally, I really enjoyed 187:
> in making such appeal care should be taken to avoid misrepresenting the truth or doing anything else that would destroy the intellectual respectability of the ideology.
I mean, nothing says intellectual respectability like claiming "I needed to do a few small bad things (like killing people) in order to publish my manifesto in which I worry about big good things (like upholding copyright)"[5]
[1] in which Algolia, rather than high dot products, allow us to reconstitute a more-or-less linear convo
[2] I'm not very convinced by the circularity argument, because if I wish to ground out boolean logic, I'm happy to either present either a truth table, or if I must, just define `∧` to be the symmetric, idempotent binary operator that tends to falsity, and so on... However, I am intrigued by his program to escape to the geometric (or at least topologic) from the algebraic, in an attempt to make the equivalent classes of proofs more explicitly syntactic and less implicitly semantic.
[3] TIL that "revisionist" may not (etymologically) have meant that someone had gone off-commie-message, but that they had gone off-revolutionary-message, in that they believed it might be possible, by making a series of incremental revisions, to accomplish the stated goals of The Revolution without bothering with all the unpleasantness that tends to accompany actual revolutions. In this sense I suppose I am both an earnest Bumble-Puppy[6] advocate and a card-carrying capital-R Revisionist.
[4] even on personal, not organisational, axes there are lacunae in dealing with volunteers; eg (exactly like certain segments of HN) Ted/"FC" advocates large family sizes for others, despite failing to adequately demonstrate any personal reproductive success.
[5] see his note to note 16.
[6] He was at least consistent in choosing to emulate "When Adam delved, and Eve(cf [4]) span" (although aren't postal services post-technological?), but I know quite a few people who have both pre-technological and "surrogate" interests, and while all of us enjoy visiting the pre-technological, we've all opted for living in the "surrogate" world.
There are many distracting falsities in "Industrial Society and Its Future", but I think the overarching problem is that Kaczynski had not watched enough "My Little Pony" to realise that his inability to develop his cutie mark[0] probably had more to do with the power process in Ted K[1] than with the "Power Process in Modern Society"?
[0] come to think of it, some accomplished entomologists have even managed to become widely known in the small circles of literature as well...
It's gonna take me a while to precisely measure the gap of our disagreement on strategy (or basic tactics!) but note that fanatics, possibly only the reproductively talented, of Ms Bechdel, ship this nonironically
I actually saw this non-ironically somewhere (The View From 80 perhaps?) in which it was suggested that, after a certain age, waking up with pain is a positive sign, with the rationale that if some part of your body doesn't hurt, maybe you didn't really wake up after all...
"Naughty" would rather be "anziehende" for Cantor; note Shakespeare, Much Ado About Nothing.
EDIT: yet another GrIneq! Any motivation for it? Max Cut clearly has applications in fratricidal hardware (see Bombers, Strategic) and it's also dual use (see https://news.ycombinator.com/item?id=40701284 ) in that pulling wholes apart (into parts which may themselves be amenable to decomposition) is the analysis step which (in general position) precedes synthesis.
I'm guessing it's about the general rule of thumb that hw is faster than sw, and so can react to urgent situations in a timely fashion. As for why sw might be shorthand for consciousness: sw can modify itself (and thus react to strategic issues)
... so yeah, hw is more or less defined as something that cannot modify itself.
We're altricial, not precocial, so over our lifetimes we can (and hopefully do) alter many of our own "reflex" behaviours. In particular, details of the 4 F's are firmware configurable.
Also, there's lots of potential for failure (and I'd say it is overwhelmingly observable that we have and do) in "can (and hopefully do)". Who knows what our absolute capabilities are, especially if we hardly or maybe even never try (at least certain things)?
Writing the capabilities to firmware is a step after this, but only after the first step is accomplished.
Glad we moved the discussion into the Overton window of firmware.
Maybe it's too early to start shipping metaphors in wetware, but developing general heuristics for productive+reproducible+survivable self-fuzzing? (*)
"n X's" might still be a useful template to keep in mind, yes?
Here are some good resources to explore the pain-gain frontier. (Or its baby versions, the supplies-surprise dilemma, shiprate quantification, etc.)
[0] caveat lector: "nature physics" is where economics that's almost certain to be rejected for very good proboscidean-tail-wiggling reasons would be dumped.
(*) Going back to the "Blindsight tangent", the social-political plane might already be an ideal platform to begin quantitative experimentation. ("Bylaws as wetware")
>Who knows what our absolute capabilities are, especially if we hardly or maybe even never try (at least certain things)?
Ego quoque, for misunderstanding what you meant by "certain things" (>)
I stress that I manifestly do not want to see physicists and tech bros publish bio-meta-how-tos just yet, did you mean high quality professionally conducted BSL training?
As for 082349872349872, I think exploring the Taoist metaphysics of marginally injurious mutual surprise as an epsilon over Buddhist tainted reinterpretations could be at most a waste of time.
And what's more, I can vouch that he's not a physicist, not even vicariously.
>though an ideology taking root in the mind is necessary to exploit its capabilities to their maximum.
Upon reflection, I do share the physicists' annoying-to-everyone-else habit of believing I can take subjects upon which books worth of ink have been spilled, do some Procrustean projection, and address them (to zeroth order, anyway?) reasonably with dozens of symbols.
Physics is almost entirely deterministic (the exception being quantum mechanics, which seems just short of it).
Metaphysics on the other hand seems to not only be not deterministic, it is often the opposite of it (at least at the level we can discern, though there is no consensus on levels or perspectives).
> address them...reasonably
Is there an implicit epistemic assertion here, or are you using these terms colloquially (as most physicists / scientists do in my experience)?
defo colloquially, because (in the tradition of M Jourdain) I'm ignorant of how one might even make an epistemic assertion thus?
When I say "[I] address subjects reasonably", I guess I'm trying to implicitly communicate a few things:
(a) if I gave you a model of a large rock, and you find something physical that can be mapped onto what I claim to be a large rock, and thereupon proceed to kick it, then no matter what your (or my) epistemic views "reasonably" suggests the model predicts that your foot will hurt before long.
(b) this model is continuous: the "easier" it is to map your bundle of sense-perceptions onto my ideal rock, the "better" my theoretical prediction of pedal pain will hold up in practice.
(c) pace James, "you get what you pay for": given that it took you seconds or minutes to understand the cheap and cheerful model, don't expect it to make better predictions than another model that takes days or years to understand.
> I'm ignorant of how one might even make an epistemic assertion thus?
In this case: click inside white rectangle box, push some buttons on keyboard to generate a series of symbols, then click on rectangle with symbol "reply" on it. Easy peasy, so much so it is almost intuitive, or sub-perceptual.
> if I gave you a model of a large rock
Westerners (I presume?) and literally every single modern-day ~philosopher I've met do love their always-physical-based analogies!
Is red really red? Is a chair really a chair? Great stuff, surely a catalyst for great revolution (any day now).
For fun, now do "Coup d'état" at the object level, in an epistemically sound manner (in case it's not obvious: this is a reference to the Jan 6 mass psychological spectacle).
Or: "COVID", or "[The] Science", or "Democracy" (our Most Sacred Institution).
Say we wish to encourage, at best type 2 thinking, and at least confidence in the ability of others to use type 2 thinking: could we come up with simple models, well within the compass of a habitual type 1 thinker (and applicable to what they consider problems), but critically involving a single inference? If so, these might be the equivalent of gimme caps in that practicality leads to ubiquity leads to —at least with regard to simple argument— improved ability?
> In the case of vertebrates, this list corresponds to the motivational behaviours that drive the activity in the hypothalamus, namely: fighting, fleeing, feeding and sexual functioning. The hypothalamus responds to these motivations by regulating activity in the endocrine system to release hormones to alter the behaviour of the animal.
A powerful force, but we have some ability to transcend such things.
It is amazing how dumb studying science makes people. Even Curt (who seems to have lots of depth in philosophy) has fallen victim to it: "Physics is understanding what reality is" lol.
> but critically involving a single inference?
I was with you up to this point....I feel like you're hinting at something here? :)
> Ego quoque, for misunderstanding what you meant by "certain things" (>)
> I stress that I manifestly do not want to see physicists and tech bros publish bio-meta-how-tos just yet, did you mean high quality professionally conducted BSL training?
How to say....more like curiosity about, and the pursuit of correctness (or at least, accurate representation) of affairs within the metaphysical realm. Something like "Enlightenment 2.0 - The Final Chapter" lol
> I think exploring the Taoist metaphysics of marginally injurious mutual surprise as an epsilon over Buddhist tainted reinterpretations could be at most a waste of time.
As a Taoist, I am very interested in whether you're interested in the truth value of this belief (and if so, to what degree). (Or were you perhaps using ambiguous language deliberately, for fun: "could be" could be a reference to a subset of the whole, or the whole itself....which did you mean? edit: "at most" seems to imply the whole?)
> And what's more, I can vouch that he's not a physicist, not even vicariously.
Can I trust you though?
> Okay so give me an example of an interesting ideology, as opposed to merely a conspiracy theory, such as "applying skepticism to your own beliefs" )
Ok: I am interested in ~"Non-binary, Comprehensive Truth". Not only is it interesting (to me), it is essentially blasphemous in Western Culture, 2024, so much so that I suspect it may lead to me getting banned from this forum some day (I've certainly been given fair warning at least, so "no one to blame but myself").
> as opposed to merely a conspiracy theory, such as "applying skepticism to your own beliefs"
As a Conspiracy Theorist and a Pedant, I lol'd. What a great start to the day!!
> I could hit the News, but I am interested in how you think.
Me too! I am extremely curious about what "thinking" is. I suspect it is quite other than what it seems to be, and is told to be (pardon the redundancy).
> E.g. did I misunderstand that you are interested in locating the precise quantitative boundaries of Systems I vs II?
I take a more vague perspective: I am interested in the nature of the distinction, and ways we could optimize it (individually and collectively). If you think about it, our current era (in the Blinded by Science and destroying our ecosystem as a consequence Western World at least) is fascinating: on one hand the humans have extensive knowledge that thinking is what separates us from other (known) life forms on the planet, and yet we pay it essentially no mind (at least at the public level...I suspect what goes on behind closed doors, perhaps within various Three Letter Agencies, perhaps elsewhere, is very different. But then that sort of thing is a conspiracy theory, and we all know what that means epistemically). I often wonder what would happen if we were to do otherwise. But how to accomplish such a thing? How to even desire to try?
> Our (individual) absolute capabilities or our (collective) absolute capabilities?
Both, though the two are necessarily and intimately intermingled, and more so every day (except for exceptions, like for example if the United States government is successful in putting down TikTok, because {silly lies that easily fool child-like, conditioned since birth minds}). I'm thinking of it from a possible worlds perspective, so "capabilities" gets a bit complex due to ~multidimensional & temporal dependencies, but hopefully you get my drift.
I hope The Normies are enjoying our "word salad". ;)
If I get your drift, I think a precondition is that we need to figure out a way to keep the Bartle Taxonomy Killers (♣) happy just playing power-{possession/craving} games with each other, instead of feeling the need to drag everyone else into the Sith spirit; compare https://news.ycombinator.com/item?id=40019115
My sub-half-hour but supra-half-min take is that official non-sanctioning of bugs/exploits might be the interesting protomechanism to impede the evolution of granfalloony (vs karasserai)
System II & System I shades to be explored when cycles freed, but my basic thesis is that the bug discoverer/onlookers are trained to quickly direct reflexivity towards in-game rewards rather than towards tribal/purity considerations?
Post-Manichean millinery?
EDIT: which fencing weapon is most conducive to a healthy democracy?
> Unlike sequential games, such as tennis and table tennis, fencing is a type of simultaneous game.
One way to turn fencing sequential is to view "initiative" as the equivalent of a ball: generally either one player or the other will have possession at any given time. (given enough distance, then neither has the initiative, just like how in a field sports it's possible for neither to have possession)
The analogy falls down because fencing is more symmetric: losing possession of the ball near the goal you're attacking isn't bad compared to losing it near the goal you're defending, but in fencing losing initiative when you've nearly completed an attack, when distance has collapsed, is much worse than losing it near the start of an attack. (explaining why in both épée and kendō stronger players invite attacks)
Unfortunately determining who has initiative from the outside requires experience — and as the invited attack shows, it's very possible to believe you have initiative when you're actually being shaped by your opponent.
(the latter explaining why a king might wish to just "face south" and let all the ambitious people of the kingdom focus on modelling and exploiting his ministers instead of him)
When I have the initiative, I am threatening in such a way that you have to respond (counter or neutralise) to what I'm doing or you will get hit. When you have the initiative, I have to respond. (and, as in chess, the equivalent of pins and forks set up situations in which even when you make a good response I've gained something without losing initiative)
If neither of us threatens the other, neither of us has initiative.
"Right of way" formalises the concept, making it easier to teach, and easier to adjudicate: unlike the subjectivity of initiative, there are clear ways to gain and lose right of way, and an outside umpire can (and does) say who has right of way at each moment.
In theory, they're the same: what one has to do to gain and keep right of way is exactly what one should "by the book" do to gain and keep initiative. In practice, they have a large overlap, but it's possible to make actions that would gain right of way but not effectively seize initiative, and vice versa, it's possible to gain initiative without first taking what would —with a conventional weapon— be right of way, or keep it without first re-acquiring right of way.
Does that make sense?
EDIT: first impressions of the paper:
Cold war? They probably mean "Napoleonic" at the latest.
33 ms video frames are too slow. 40 ms is the "lockout" (time during which hits are counted as simultaneous) in épée; in my experience, this corresponds to about 10 cm worth of distance (probably greater for more competitive fencers), and until one learns how to construct a point to set up a clean score, it's astonishingly easy for phrases to end in double hits — 1 frame of the video. And sabre is a faster game than épée.
> The proper use of tactics made up for [Szatmari's] lack of ability in this area, making it difficult to find a very effective way to beat him, so he won the championship at last.
I have heard it claimed that we have much less of a doping problem than other sports, because the sorts of drugs that increase physical ability (speed, endurance) tend to blunt mental capacity (choice and deployment of tactics).
The paper gives one story of Szatmari/Gu; here's a possible other story: (compare with the alien vessel in Blindsight) in the first part of the match, Szatmari explores, sacrificing some points to Gu to (a) take the edge off Gu's energy reserves, (b) learn exactly where Gu's strongest attacks go and how to trigger them, and possibly even (c) feed Gu disinformation about Szatmari's strongest actions; and in the second half of the match, he exploits, more than making up for whatever points he lost to get that information (or, if he did so, perform that shaping). [But note this is a very épée-tinged alternative telling]
Next time I stop by my old club I'll have to ask the fencing masters what they think about the ideas in this paper — it's very much outside the prisms the French tradition uses to analyse bout performance.
Alright seeing that I have to do some major unrelated symbol-pushing in the toplevel today... I might have to take a few days to process what has been said here. I know folks do not mind, but thought a heads-up would be in order. Praying to the gods of HN to keep this thread alive )*
EDIT: as for the delvings into ancient "Taoist" texts, I would do so myself, if I didn't feel that my models have been heavily polluted by god-knows-what. Those conspiracy theories are very compelling. Maybe a search for the actual battle records of 姜子牙 is in order)[0]
*As a Procrustist or Protist lol
[0] https://m.thepaper.cn/newsDetail_forward_24715199 says (likely CCP propaganda, but sometimes fun to read)
史记 should be the first cache to look up. (The rest of that has mostly been quoted to me, but I iterate to motivate mistermann's possible context updates)
I concur that HN is a liminal superspace whose character is slippery and so fit for rummagings in the murk, dang is a Taoist best sovereign whose existence is only dimly felt*, etc etc but all I really want to do for now is to uh serialize something bothering me since mistermann replied.. quite localized to procrustean-physicism...
maybe physicists are hedgehogs who imagine themselves foxes, but then it occurred to me that it might be true that most non-French non-logician mathematicians tend to stay away from wider sociopolitical commentary (unless prodded)
I can think of reasons, but some implications are clear?
1. Kolmogorov thought of himself as more of a mathematician than a physicist.
2. Arnol'd was heavily frenchified, but had to rationalize that. (In particular, his students didn't inherit his attitudes?)
3. Galois was pretty normal.
4. Grothendieck was special but found a way to resolve the inner and outer (cultural) game.
5. Kaczynski.. too much of a logician? (Dare I suggest he was special XD?)
Re: Kaczynski, I think he stopped being an analyst in 1969, and switched to being a non-mathematician prepper?
(and usually analysis is pretty far from logic in various people's maps of mathematics, but here's an exception, in which the "plains of analysis" [sic] are right next to the "ocean of logic": https://utenti.quipo.it/base5/scuola/mate_metafor_mathematis... )
(unfortunately going in the other direction, as with Yuri and Tonya Zhivago, I believe Vladimir Vysotsky might've done better —at least healthwise if not artistically— to have spent much more time in Paris with Marina)
I wonder to what degree southern US charismatic christianity owes its peculiarities (glossolalia, etc.) to proximity to whatever practices predated vodoun?
[When I was young, we were supposed to "leave room for the Holy Spirit" at dances. That Holy Spirit sure got around, because even on the other side of The Wall, the godless commie young pioneers were supposed to leave 50cm for it: https://avatars.dzeninfra.ru/get-zen_doc/1866022/pub_63343bd... ]
Also, the Russian version of glossolalia, if it existed/exists in Russian-America, would be interesting/also-relevant, weren't they a minor splinter of the great Schism/some spending a lot of time east of the Urals?
> Theoretical physics is done by physicists who lack the necessary skills to do real experiments; mathematical physics is done by mathematicians who lack the necessary skills to do real mathematics.
I owe my switch from being a "systems" informatician to a "theory" informatician exactly to a mathematical physicist, so I'm glad there are some people who don't find it too much of a burden to work with the mathematically immature: https://press.uchicago.edu/ucp/books/book/chicago/M/bo415803...
(I'm guessing the "Algebra of Observables" is why JvN seems sometimes to crop up under every other stone I turn in semigroups?)
(Have I mentioned I'm very disappointed that «Иван Васильевич меняет профессию» hides exactly how Ivan Grozny crosses himself; how are we supposed to decide if he's an Old Believer or not? I also still have to watch that «Ku» «Kyu» one sometime...)
As one of the high school geometers of all time, I was always rather partial to ≅ which we used for congruence. I'm happy to share it with other mathematicians if they'll agree to keep their mitts off of ":=".
Is there a decent overview of what the aim of mathematics solvers is these days? In light of Hilbert's effort at formalism in the 1920s, and the incompleteness theorem, I was wondering where this was going, and what the known limitations are.
The incompleteness theorem isn’t an argument against formalism, it’s just a bound on what you can do with it (you can’t use one set of axioms and expect to prove everything). That’s ok, because you are,
1. Not required to prove everything which is true,
2. Not limited to one set of axioms.
The nice thing about formalism is that you can use a computer program to check whether a proof is sound or not. Isn’t that nice? It has applications in e.g. computer science and engineering, where you can prove that a program does not crash, or prove that a traffic light never shows green in conflicting directions, or prove that a real-time kernel always provides enough computational time for your rocket computer to steer.
In working mathematics, there are more and more complex proofs that cannot reasonably be checked by humans. Having a computer check them is a nice alternative.
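As a tiny illustration (Lean 4 syntax; my sketch, not from the thread): a machine-checked proof is just a term that the kernel verifies against the stated proposition, so checking is mechanical even when finding the proof was not.

```lean
-- The kernel checks that the term on the right really proves the claim.
example : 2 + 2 = 4 := rfl

-- A universally quantified claim; `rfl` works because `n + 0` reduces to `n`.
example : ∀ n : Nat, n + 0 = n := fun _ => rfl
```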
Current hopes (even by Terence Tao) are that you give a sketch of a proof to GPT-X and it will give you a link to the formally confirmed proof.
Others hope that the next AI will come up with a sketch of a proof itself.
GPT-4 sucks at Lean because Lean has had its syntax changed too often, and GPT-4 shows limited reasoning capabilities.
However, it did come up with the following proof for my little order of pairs:
    instance : LT (ℕ × ℕ) where
      lt f g := f.1 < g.1 ∨ (f.1 = g.1 ∧ f.2 < g.2)

    instance : DecidableRel (LT.lt : ℕ × ℕ → ℕ × ℕ → Prop) :=
      fun (a, b) (c, d) =>
        if h₁ : a < c then isTrue (Or.inl h₁)
        else if h₂ : a = c then
          if h₃ : b < d then
            isTrue (Or.inr ⟨h₂, h₃⟩)
          -- the rest is even more silly since we have to turn everything around
          else isFalse (λ (h : a < c ∨ (a = c ∧ b < d)) =>
            Or.elim h
              (λ hlt : a < c => absurd hlt h₁)
              (λ heq : a = c ∧ b < d => absurd (And.right heq) h₃))
        else isFalse (λ (h : a < c ∨ (a = c ∧ b < d)) =>
          Or.elim h
            (λ hlt : a < c => absurd hlt h₁)
            (λ heq : a = c ∧ b < d => absurd (And.left heq) h₂))
This may lead back to the original question because Bool ≠ Prop and the formula for LT should ideally lead directly to decidability without all this redundancy?
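For what it's worth, a hedged sketch (assuming Mathlib's ℕ notation and that the instance unfolds during elaboration): since `<` and `=` on ℕ are already decidable, and `Or`/`And` of decidable propositions are decidable, instance search can often assemble the whole `DecidableRel` without the manual case split above.

```lean
instance : LT (ℕ × ℕ) where
  lt f g := f.1 < g.1 ∨ (f.1 = g.1 ∧ f.2 < g.2)

-- Nat.decLt, Nat.decEq, Or.decidable and And.decidable already exist,
-- so the disjunction is decidable by instance search alone:
instance : DecidableRel (LT.lt : ℕ × ℕ → ℕ × ℕ → Prop) :=
  fun f g => inferInstanceAs (Decidable (f.1 < g.1 ∨ (f.1 = g.1 ∧ f.2 < g.2)))
```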
> It is strongly recommended, but not strictly required that (x.compareTo(y)==0) == (x.equals(y)). Generally speaking, any class that implements the Comparable interface and violates this condition should clearly indicate this fact. The recommended language is "Note: this class has a natural ordering that is inconsistent with equals."
Why would you allow .equals and .compareTo to have inconsistent meanings?
That is a guaranteed source of bugs. I cannot think of a single good reason to allow this. If you need a different order for some reason, that is what the Comparator interface is for.
It is perfectly reasonable to want all sorted collections of X to have a canonical order, but to still allow people to consider equivalent items to be equal.
The example someone else gives with +0 and -0 is very good: in most contexts, those two should probably be equal (+0.equals(-0)), but when sorting a list, you might want all -0s to come before all +0s (-0.compareTo(+0) == -1).
Given that Comparable is almost exclusively used for sorting, this is actually quite unlikely to cause problems in practice.
Of course, it's a much bigger problem if you use entirely different concepts of equality. For example, a Name class where a.equals(b) only if they have the exact same Unicode code points, but where a.compareTo(b) == 0 if they have the same Unicode codepoints after normalization would be crazy.
Edit: and of course BigDecimal from the example below does exactly this crazy thing...
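For the record, the BigDecimal behaviour is easy to demonstrate (a quick sketch; the class name is made up, the API calls are standard `java.math`/`java.util`):

```java
import java.math.BigDecimal;
import java.util.HashSet;
import java.util.TreeSet;

public class BigDecimalFootgun {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("1.0");
        BigDecimal b = new BigDecimal("1.00");

        // equals() compares value AND scale, so these differ...
        System.out.println(a.equals(b));      // false
        // ...but compareTo() compares numeric value only.
        System.out.println(a.compareTo(b));   // 0

        // Consequence: hash-based and sorted collections disagree
        // about how many "distinct" elements there are.
        HashSet<BigDecimal> hs = new HashSet<>();
        hs.add(a); hs.add(b);
        TreeSet<BigDecimal> ts = new TreeSet<>();
        ts.add(a); ts.add(b);
        System.out.println(hs.size());        // 2 (uses equals/hashCode)
        System.out.println(ts.size());        // 1 (uses compareTo)
    }
}
```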
Yep, BigDecimal does this crazy thing, and it shouldn't. It does it, because the buggy spec allows it.
If the canonical equality you want to provide is not compatible with the canonical order you want to provide, then don't provide the canonical order. Or provide an order that is compatible, which makes it pretty much canonical. It is really simple, and going against simple things like that is why software is such a mess. Because software really allows you to do anything you want. And now it is down to you to want the right thing.
Yes, that makes this a footgun, and the specification containing this footgun buggy. It is a bug in the spec because there is no good reason for the footgun being there.
You have not yet imagined a reason, and neither have I. That doesn't mean one doesn't exist. Likely it would be something idiosyncratic involving legacy code that would only make sense in a real world context - "we have to change the way it sorts to fix ticket ABC123, but we can't change the equality, or we'll have a regression of ABC456."
This presumes a sequential model of computation, but we are actually free to discover functional and concurrent models: they are equivalent, yet they give philosophical freedom while retaining rigor.
I know it's not related to the content of the article but I hate when websites redirect me to a page of articles when I click the back button, if that isn't where I came from. I've already decided I want to go back to what I was looking at before, shoving a load of stuff that I didn't want into my face isn't going to make me change my mind
Leaving aside the more advanced definitions of equality, it'd be good if more emphasis were put on the difference between '=' and '≡' early on in one's training. That is, explaining why 'equivalent' is different from 'equals'.
I recall the difference in the definitions being confusing for most of us kids.
The way I see it, the distinction between = and ≡ isn't really to do with equality having more than one meaning. An "identity" like sin^2 x + cos^2 x ≡ 1 is really just shorthand for "for every real number x, we have sin^2 x + cos^2 x = 1". The equals sign here has the same meaning as in a statement such as "there is a real number x such that sin^2 x + cos^2 x = 1"; the difference is in the surrounding language, and what meaning it assigns to the variable x.
So perhaps rather than just emphasizing the difference between = and ≡ more, it would be better to go further and emphasize the difference between universal and existential quantification more. Quantifiers can be confusing, but I think people also find having two different equals signs confusing; and it wouldn't be necessary to give a full account of predicate logic to high schoolers, I'm thinking more of just informally describing the difference between "for all" and "there exists" and reminding them that a bare variable has no meaning if you don't know what set it ranges over and how it's quantified.
This is just my speculation, I have no experience with mathematical education whatsoever.
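Concretely (my sketch, not part of the parent comment), the two readings differ only in the quantifier out front, not in the meaning of the equals sign:

```latex
% An "identity" is an implicitly universally quantified equation:
\forall x \in \mathbb{R}:\quad \sin^2 x + \cos^2 x = 1
% "Solve the equation" asks for a witness, i.e. an existential claim:
\exists x \in \mathbb{R}:\quad 2x + 1 = 7
% The = sign means the same thing in both; only the quantifier differs.
```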
"...emphasize the difference between universal and existential quantification more. Quantifiers can be confusing..."
Right, I've done uni math and also formal logic, so I have knowledge of universal and existential quantification, etc., and thus an understanding of the issues.
You're right, that stuff's a bit too heavy for high schoolers. Perhaps all that's needed is for them to be told that at times these appear the same, but that later on they'll need to understand there's a mathematical distinction, and then emphasize that difference to drive the point home.
Even though it was a long time ago, I mostly recall what the teacher said, and whilst he gave a few examples he never emphasized that there was a mathematical difference and that it was an important fact to know. Matters became more ambiguous in our science courses, where the use of 'equivalent' was very loose.
I reckon the same or similar should apply to other topics, calculus for example.
I’ve been working on a glossary of “Notions of Equality & Sameness” every few months for the last ~8 years (and looking for collaborators: carson@carsonkahn.com) - surprised not to have had this one in my list already!
This isn't news to any logician or people working in formal methods. More like a mathematician getting his head around what has been thought about for decades, if not centuries.
I don't think you need to go to higher level math to run into this sort of problem.
When we're in school solving equations, getting to the initial stages of calculus, you start running into identities like sin^2(x) + cos^2(x) = 1, which is sometimes, but not often, written with three parallel lines instead of two.
What this means is that the functions on either side of the sign are equivalent, any x you pick doesn't change the graph.
At some point, you will be solving simultaneous equations, and someone will have the bright idea that you can just differentiate both sides and solve that. This happens to be true if you have something like the above: it's still valid that the derivative is the same anywhere, since they are the same function. But if you are actually just finding the crossing point of two graphs, as one normally does at that level, then the equality holds only for a particular value of x, and there's no reason the derivatives would be the same.
In general there's a lot of these little notation things that creep up around that stage. The big one is treating dy/dx like a fraction, which you can do in certain cases to solve differential equations, but only in certain cases. If you don't understand why, you might get very confused about what the symbols actually mean.
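A worked contrast of that failure mode (my addition, following the parent's sin/cos example):

```latex
% Differentiating an identity (true for every x) is valid:
\sin^2 x + \cos^2 x \equiv 1
\;\Longrightarrow\; 2\sin x\cos x - 2\cos x\sin x = 0 \quad\text{(true for all } x\text{)}
% Differentiating an equation (true only at particular x) is not:
x^2 = 2x + 3 \quad\text{holds at } x = 3 \text{ and } x = -1,
% but differentiating both sides gives
2x = 2 \;\Longrightarrow\; x = 1, \quad\text{which solves neither.}
```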
tl;dr as an acolyte is that sometimes the bug can be fixed by adding extra data (e.g. number of minus signs, or method to compute minus signs) to your declaration of equality.
Concrete example from Kev Buzzard's "Lean proof of Fermat's Last Theorem" from yesterday (2024-06-19 UT)
p34:
"It is impossible to say which of the two canonical isomorphisms is “the most canonical”;
people working in different areas make different choices in order to locally minimise the
number of minus signs in their results."
Every mathematician knows that the integer 1 and the real number 1 are different objects [0]. We could make every invocation of a canonical isomorphism explicit, but that would make all of math far too unwieldy to do.
This is the core problem with formal proof verification. Current provers need you to prove things at such a low level that they are simply unwieldy to use. That does not mean we need to reinvent all of math. It means we need to improve formal verification tooling. Maybe we also need to train mathematicians in formal verification systems, in much the same way that they had to be trained to use computer algebra systems. We didn't reconstruct all of math. We just learned how to work within the limitations of tools when we use those tools.
[0] Unless you go out of your way to construct them to be the same thing. There is not actually any official definition of, for example, what the integer 1 is. Since so many definitions of it are equivalent, that is a fundamentally pointless conversation to have. A few textbook authors need to make a decision for pedagogical purposes. No one else cares.