I studied applied mathematics, but I have to admit number theory has always been, and probably always will be, a total mystery to me. My whole intuition is tied to dynamics and computability, and this is just absolutely alien. I’m probably more in awe of number theorists than the average person is: most people group them in with mysterious ‘mathematicians’ (with all the attendant mystique and implied genius), whereas I’ve actually “been there, [tried] to do that” and failed, and that marks it out as an almost insurmountable peak.
My intuition was already screaming 'it'll be the other way around for a geometric progression' before I read that far, but I'm damned if I can understand or even speculate why. Most likely I don't properly understand it.
I'm inclined to wonder if there's a third operator which could be tested like this, such as exponentiation, but that's not commutative over integers.
Of course, if there are similarly intriguing patterns for noncommutative operators (and their sequences) the obvious next step would be to look at complex numbers and quaternions...
> My intuition was already screaming 'it'll be the other way around for a geometric progression' before I read that far, but I'm damned if I can understand or even speculate why.
It's a question of forcing the number of sums/products to be low. An arithmetic progression forces a large number of identical sums for the obvious reason: (a+b) = ((a-k)+(b+k)) = ((a-2k) + (b+2k)), and so on, and those differences of k are... the definition of an arithmetic progression.
The reason a geometric progression produces a lot of identical products is exactly the same: ab = (a/k · bk) = (a/k² · bk²), and so on.
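A quick sketch of the two collapses described above (my own illustration, not from the article; I'm counting pairwise sums and products with the diagonal included, i.e. a+a and a·a count):

```python
# An arithmetic progression collapses the sum table (a+b = (a-k)+(b+k)),
# while a geometric progression collapses the product table
# (ab = (a/k)(bk)). Count distinct pairwise sums and products of each.
from itertools import combinations_with_replacement

def distinct_sums(nums):
    return {a + b for a, b in combinations_with_replacement(nums, 2)}

def distinct_products(nums):
    return {a * b for a, b in combinations_with_replacement(nums, 2)}

ap = [5, 8, 11, 14, 17, 20]    # arithmetic, common difference 3
gp = [5, 10, 20, 40, 80, 160]  # geometric, common ratio 2

print(len(distinct_sums(ap)), len(distinct_products(ap)))  # 11 21
print(len(distinct_sums(gp)), len(distinct_products(gp)))  # 21 11
```

Both sets have 21 pairs, but the arithmetic progression only manages 11 distinct sums, and the geometric progression only 11 distinct products; the mirror image is exact here.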
It's not too surprising that there are fewer unique products for a geometric progression, because subsequent terms in a geometric progression are found by multiplying by the common ratio. There will be fewer unique products when all of the numbers share that common ratio.
Maybe I don't understand the full implications of the problem, but it seems obvious that a multiplicative table will result in more 'numerical diversity' than a summation table. Both multiplication and addition yield points on lines described by the usual y = mx + b equation. Addition constrains the slope to 1, so all of the results will be clustered near the origin, with more coincidences as an unavoidable result. Conversely, if the slope is allowed to exceed 1, the resulting 'lines' will spread out on the graph and leave more room for unique points.
> but it seems obvious that a multiplicative table will result in more 'numerical diversity' than a summation table
That isn’t true. If you pick the set of numbers {1, 2, 4, 8}, you get 10 different sums (2, 3, 4, 5, 6, 8, 9, 10, 12, and 16) but only 7 different products (1, 2, 4, 8, 16, 32, and 64).
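Those counts are easy to check mechanically (a sketch of my own, assuming the table includes the diagonal, i.e. 1+1 and 8·8 count):

```python
# Verify the distinct sums/products of {1, 2, 4, 8}.
from itertools import combinations_with_replacement

s = [1, 2, 4, 8]
sums = sorted({a + b for a, b in combinations_with_replacement(s, 2)})
prods = sorted({a * b for a, b in combinations_with_replacement(s, 2)})
print(sums)   # [2, 3, 4, 5, 6, 8, 9, 10, 12, 16] -> 10 sums
print(prods)  # [1, 2, 4, 8, 16, 32, 64]          -> 7 products
```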
Sure, you can always find special-case exceptions, but I still don't see why the result is surprising in the general case. There are a lot more arithmetic progressions than geometric ones.
Ah but the conjecture isn't about what happens “on average” or for a “random set” of integers. It's about what is forced to happen in every possible case. (In the “random” case there are probably very few or no duplicates anyway, as both a+b=c+d and ab=cd will be unlikely for completely random integers — note that they say d is exactly a+b-c or ab/c respectively — the conjecture is about limiting what you could possibly arrange to happen, non-randomly.)
Basically, when you take integers with additive relations between them, like {1, 2, 3, 4} or {1, 3, 5, 7} (e.g. small integers), there tend to be a lot of duplicates in the addition table. But if you take integers with multiplicative relations between them, like {1, 2, 4, 8} or {1, 3, 9, 27} (e.g. very similar prime factorizations), there tend to be a lot of duplicates in the multiplication table. In other words, it's very easy to create sets with either lots of duplicates in the addition table or lots of duplicates in the multiplication table.
The conjecture is not about which kind of set is more common (so it doesn't matter even if it's true that there are a lot “more” arithmetic progressions than geometric ones, in whatever sense), but rather says there is no way you can create a set with both sorts of collisions — it's saying that if a set has many additive coincidences it can't have many multiplicative coincidences and vice-versa.
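A small sketch (my own, not from the article) comparing the four example sets above: the additively structured sets collide in the addition table, and the multiplicatively structured ones collide in the multiplication table, but no set does both.

```python
# For each set, report (distinct sums, distinct products) over all
# pairs including the diagonal. 4 elements give 10 pairs, so 10 is
# the collision-free maximum for either operation.
from itertools import combinations_with_replacement

def counts(nums):
    pairs = list(combinations_with_replacement(nums, 2))
    return len({a + b for a, b in pairs}), len({a * b for a, b in pairs})

for s in ([1, 2, 3, 4], [1, 3, 5, 7], [1, 2, 4, 8], [1, 3, 9, 27]):
    print(s, counts(s))  # (distinct sums, distinct products)
```

The arithmetic progressions come out at 7 distinct sums but 9–10 distinct products; the geometric progressions at 10 distinct sums but only 7 distinct products. Few collisions on one side seems to force many on the other, which is the shape of the conjecture.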
In some deep sense, what you are saying is true: for example, if an infinite sequence of numbers is sufficiently dense, then it has arbitrarily long arithmetic progressions.[1] On the other hand, Rankin in 1990 constructed a set with positive density which does not have long geometric progressions.
If you consider all finite sequences of positive integers with all values below some limit M, it might be the case that there are more arithmetic sequences than geometric sequences. I don't think that's relevant to the article, but might be where your head was when you made that post.
Did you read the article? The numbers don't have to be consecutive. For example, what you said is only true for arithmetic progressions; for geometric progressions the opposite happens, and multiplication has less diversity. Read the full article.
dunno, but it's neat to think about. multiplication can cover more ground in integer space than addition, so it can reach more unique places than addition for a given set of numbers. it's kinda like comparing how many squares knights and pawns can reach on a chessboard for a given number of moves.
If you give well-spaced source numbers, they can reach the same number of spaces.
For products it's easy: distinct prime numbers.
For addition you can pick numbers so that each one is at least [previous number] x [number of moves] + 1. To keep it extra simple go with [number of moves + 1]^n.
So this is very easy if the number of moves is known upfront, and impossible if it's not.
And in the source puzzle the number of moves is always 2.
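For two moves this is easy to demonstrate (my own sketch, not from the thread): distinct primes make every pairwise product unique by unique factorization, and powers of [number of moves + 1] = 3 make every pairwise sum unique, since a sum of two powers of 3 determines its base-3 digits.

```python
# With 5 well-spaced source numbers there are 15 pairs (diagonal
# included), and both constructions hit all 15 distinct outputs.
from itertools import combinations_with_replacement

primes = [2, 3, 5, 7, 11]          # all pairwise products distinct
powers = [3**n for n in range(5)]  # 1, 3, 9, 27, 81: all pairwise sums distinct

n_pairs = len(list(combinations_with_replacement(primes, 2)))
print(n_pairs)  # 15
print(len({a * b for a, b in combinations_with_replacement(primes, 2)}))  # 15
print(len({a + b for a, b in combinations_with_replacement(powers, 2)}))  # 15
```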
This is maths we are talking about, you should never assume that the common usage of words applies to the precise definition used in the maths context. Take the Boolean inclusive "or", for example.
'connection' is a somewhat misleading way of stating it. It's more like the opposite of a connection. A set of numbers cannot have trait A and trait B at the same time.
Trait A is that the numbers have a certain kind of similarity in their spacing, leading to few unique sums. Trait B is that they have a certain kind of similarity in their factors, leading to few unique products.
So from some angle it's interesting that you can't impose both types of patterns on a single set of numbers, because that inability implies a relation between two very different operations.
But on the other hand why would you assume that you should find an intersection between two restrictive algorithms? Maybe there's a hidden assumption that the algorithms would be random-ish and let you find an overlap if you search hard enough, and that assumption is screwing with people's intuition.
> Trait A is that the numbers have a certain kind of similarity in their spacing, leading to few unique sums. Trait B is that they have a certain kind of similarity in their factors, leading to few unique products.
This is an excellent way to describe it. I had the sense that “of course the geometric sequences will collide more when multiplying” but if you asked me “why?” I would have trouble explaining. So thanks.
As far as finding a set that has an equal number of distinct sums and products, would it be possible to come up with a sequence that’s “half”-arithmetic and “half”-geometric?
Even a single value that doesn't fit in one sequence will cause a lot of extra values, so your ability to fit multiple patterns gets worse and worse as you go longer. Something like 2 4 6 8 16 does pretty well for its size but even then it's giving you twelve distinct outputs for either operation.
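The claim for {2, 4, 6, 8, 16} checks out (a quick sketch of my own, counting pairs with the diagonal included):

```python
# The half-arithmetic, half-geometric set {2, 4, 6, 8, 16} gives
# twelve distinct outputs under either operation, out of 15 pairs.
from itertools import combinations_with_replacement

s = [2, 4, 6, 8, 16]
sums = {a + b for a, b in combinations_with_replacement(s, 2)}
prods = {a * b for a, b in combinations_with_replacement(s, 2)}
print(len(sums), len(prods))  # 12 12
```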
> But on the other hand why would you assume that you should find an intersection between two restrictive algorithms?
I think it's key here that these two algorithms are so basic and fundamental to the structure they are executed upon (the naturals) that every other algorithm will use them in some way. The very notion of "algorithm over the naturals", and the intuition of what it means for such an algorithm to be restrictive, are determined by the nature of these operations.
Put like this, it seems far less obvious to me that there is an a priori reason why the results of these two operations should or shouldn't coincide. Both scenarios seem plausible before we study problems like these.
the connection is that picking your numbers so that you end up with a lot of duplicate sums constrains you to have very few duplicate products, and vice versa.
I think it's more like having a low number of one precludes having a low number of the other. I may be wrong, but I think you can have a large number of both.
The real point is that no one gets the hidden connection. There are lots of tantalizing clues in number theory that lead us to believe there is some fundamental connection between addition and subtraction, but we can't state it outright. This toy problem is like a black box in that we can play with it and quantify its behavior, but we don't know (exactly) why it works.
I think you mean “addition and multiplication”, not “multiplication and subtraction”... there definitely is a lot of evidence that addition and subtraction are related, but that’s just plain trivial.
Heh, no worries. I got it wrong too... I meant to write “addition and subtraction” and ended up writing “multiplication and subtraction”... so that’s what I get for trying to be a pedant (and boy do I deserve it).
[e: that lecture is almost a decade older than the result mentioned in OP]