A brief explanation, from a layman who has wondered about this before, of why primes peak at repeated multiples.
Obvious first example: all primes above 2 are of the form 2x+1. An obvious repeating pattern of primes.
You can take this a small step further. All primes above 6 are of the form 6x+1 or 6x+5. Anything else is a multiple of 2 or 3. Above 6, only 1/3 of numbers are worth considering as prime candidates. This is a slightly less obvious example.
A small step further: all primes above 30 are of the form 30x+1, 30x+7, 30x+11, 30x+13, 30x+17, 30x+19, 30x+23 or 30x+29. Anything else is a multiple of 2, 3 or 5. So above 30, only 8/30 numbers are worth considering as prime candidates. See how we've created a new pattern for the multiple 2x3x5 = 30 that rules out a swath of prime candidates.
I could repeat this for each prime found. E.g. I could take the common multiple of 2, 3, 5 and 7 (210) and create a similar pattern for all numbers above 210 that rules out the repeated multiples of 2, 3, 5 and 7 (leaving just 48/210 numbers worth considering as prime candidates).
This is why you see peaks of primes at various repeating multiples. For every new prime found, you can take the product of it and all previous primes. From that you can rule out primality for all but certain offsets from any multiple of that number. So primes above certain numbers can only exist in certain forms, which is why you see primes at repeated patterns from each other: the primes can only exist in those forms.
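The pattern described above can be sketched in a few lines of Python (my own illustration, not from any paper): the residues that survive each "wheel" are exactly those coprime to the product of the primes used, so counting them reproduces the 1/3, 8/30 and 48/210 figures.

```python
# Compute the residues mod 2*3*...*p that can still contain primes.
from math import gcd

def wheel_residues(primes):
    """Residues mod the product of `primes` not divisible by any of them."""
    m = 1
    for p in primes:
        m *= p
    return m, [r for r in range(m) if gcd(r, m) == 1]

print(wheel_residues([2, 3]))                 # (6, [1, 5]) -> 2/6 = 1/3
print(len(wheel_residues([2, 3, 5])[1]))      # 8  (out of 30)
print(len(wheel_residues([2, 3, 5, 7])[1]))   # 48 (out of 210)
```

The count of surviving residues is just Euler's totient of the product, which is why it shrinks as each new prime is folded in.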
I haven't done all the math for this (I've deeply investigated the pattern for 2x+1) but it seems like this would be an obvious and intuitive result of primes. You are still generating primes from primes. Yes, you find more primes, but the computation is still dependent on primes. I'm still of the opinion that there is no complete pattern to the primes.
I'm assuming the researchers do not intend to confuse a crystal lattice structure with an actual mathematical lattice, because while the two may share similar influences in their models, one is math, the other is physics.
I've heard this sentiment before and I don't really understand it. There's no separating physics and math. Keeping math "pure" really means keeping it "purely abstract", so it resists any kind of practical application.
There is. Even though most physics research is extremely mathematical and abstract these days, it's still ostensibly grounded in empirical science. Math is not science, it just provides useful tools and insights for studying science. Unlike physics, the disciplinary imperative of math is not to provide us with truths about this world or any other world. Its imperative is to tell us what must follow as a consequence from a given set of assumptions and definitions. This is a very important philosophical dichotomy because it means that even the most lackadaisical, abstract problems in physics (such as moonshine in high energy physics) are still grounded in something "real." Math need not be grounded to anything real; it can be decoupled from what is real or even possible entirely.
> Keeping math "pure" really means keeping it "purely abstract", so it resists any kind of practical application.
I'm not one to be elitist with regards to pure versus abstract mathematics so I sympathize with your point here. That being said, purely abstract mathematics can be extremely useful even if it doesn't ultimately relate to the real world. Consider what G.H. Hardy wrote nearly a century ago in A Mathematician's Apology:
"...both Gauss and lesser mathematicians may be justified in rejoicing that there is [number theory] at any rate...whose very remoteness from ordinary human activities should keep it gentle and clean."
If only Hardy had lived long enough to see his pure and beautiful number theory sullied with the applications to error-correcting codes and cryptography.
While I certainly understand that math and physics are separate concepts, physics as we know it wouldn't be possible without math. I'm sure in your mind you can separate them, but if you took math away from physics, we wouldn't have modern physics.
Math is how physics is given practical application; if anything, that means science is more abstract than math is.
I don't agree with this. And honestly, I really think that depends on what foundation you rely on to think with, work with, create with, test with, and check your own tests with. Physics does not have to use itself to understand itself. Math does.
It's simple. Physics uses mathematics to construct equations that describe properties of physics.
The difference is physics has reality to test against - observations that can be measured. Math does not have this. Math's only metric against itself is itself.
I don't find it at all surprising that we find a natural occurrence of prime number patterns in nature. It makes perfect sense to find them in crystals if you think about it for a while.
First, remember that 'primeness' is not really a property of a number; it's the absence of the property of being composite. "Can only be divided..." meaning "Can't be divided..." meaning "Not composite."
Second, crystals are formed by repeating patterns. What happens if you compose a pattern from many repeating patterns and overlay them? There would be 'features' where the patterns don't overlap.
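The overlay idea can be made concrete with a toy sketch (my own illustration): mark every position hit by the repeating patterns of 2, 3 and 5, and the untouched positions are exactly the "features" — below 30, those are the primes.

```python
# Overlay the repeating patterns of multiples of 2, 3 and 5 below 30;
# the positions no pattern touches are the leftover "features".
N = 30
marked = set()
for p in (2, 3, 5):
    for k in range(2 * p, N, p):  # start at 2p so p itself survives
        marked.add(k)
survivors = [n for n in range(2, N) if n not in marked]
print(survivors)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

This is just the sieve of Eratosthenes viewed as superimposed periodic patterns, which is exactly the analogy to overlapping lattices in a crystal.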
Do papers try to make things seem difficult, exciting and mysterious on purpose?
Certainly both (prime, composite) could be, and are, described as properties. The difference is that one can be described positively as 'having' and the other as 'not having' or 'having only'. If you've read Gödel, Escher, Bach, it's very much in how it describes axiomatic space with 'foreground' vs 'background' when looking at the boundary of what's inside and outside a set with a given property. Compositeness is a construction. Primeness is what's outside.
I read once that prime numbers are a key element in cryptography (because it's easy to multiply two prime numbers, but difficult to recover the two primes from their product, if I remember correctly). Will this discovery have a negative impact on it?
No it won't. This result is related to the problems of proving properties about the prime numbers (such as gaps) and of determining whether or not a given number is prime. It has nothing to do with the computational intractability of factoring large numbers.
RSA utilizes an extremely large semiprime (the product of two very large prime numbers) to generate a public/private key pair. This result does not meaningfully change anything related to the computational work required to factor semiprime numbers that have over 600 digits.
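To make the semiprime point concrete, here's a toy sketch of RSA key generation (my own illustration with deliberately tiny textbook numbers; real keys use primes hundreds of digits long, and real implementations add padding and many other safeguards):

```python
# Toy RSA: the public modulus n is a semiprime p*q. Multiplying p and q
# is trivial; recovering them from n alone is the hard direction.
p, q = 61, 53            # illustrative tiny primes; real ones are ~300+ digits
n = p * q                # public modulus
phi = (p - 1) * (q - 1)  # needs p and q, which only the key owner knows
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

msg = 42
cipher = pow(msg, e, n)
assert pow(cipher, d, n) == msg  # decryption round-trips
```

The private exponent d can only be computed from phi, and phi requires knowing the factors p and q — which is exactly why factoring the 600+ digit semiprime is the barrier.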
I think the usual abstract algebra definition is one factor: itself. 1 is not considered a factor so that prime factorizations are unique, otherwise you could tack on an infinity of ones.
The abstract algebra definition is that p is a prime if whenever p divides a product ab then p divides at least one of a and b.
Having no proper factors is the definition of an irreducible element.
The two definitions agree for the integers and other nice algebraic structures (UFDs), but there are algebraic structures in which they are not the same, which explains why there are two different definitions and names in abstract algebra.
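The standard example where the two definitions come apart is Z[√-5]. A small sketch (my own illustration) representing a + b√-5 as a pair (a, b):

```python
# In Z[sqrt(-5)], 6 factors two ways: 2*3 and (1+sqrt(-5))*(1-sqrt(-5)).
# The norm a^2 + 5b^2 is multiplicative, which drives the argument.
def mul(x, y):
    (a, b), (c, d) = x, y
    return (a * c - 5 * b * d, a * d + b * c)

def norm(x):
    a, b = x
    return a * a + 5 * b * b

six_a = mul((2, 0), (3, 0))    # 2 * 3
six_b = mul((1, 1), (1, -1))   # (1 + sqrt(-5)) * (1 - sqrt(-5))
print(six_a, six_b)            # (6, 0) (6, 0) -- both equal 6

# 2 divides the product 6, but N(2) = 4 does not divide
# N(1 +/- sqrt(-5)) = 6, so 2 divides neither factor: 2 is not prime
# here, even though 2 is irreducible (it has no proper factorization).
print(norm((2, 0)), norm((1, 1)), norm((1, -1)))  # 4 6 6
```

So in this ring 2 is irreducible but not prime, which is why the two notions need separate names.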
Pardon my extreme ignorance of the subject, but could you elaborate on why your statement is mutually exclusive of the OP? Not at all combative, just a sincerely interested layman :)
Define "easier" in this context. If you have an algorithm whose complexity increases exponentially with the key length, saying you made a 10% gain doesn't mean squat.
You should read the authors' paper directly; a preprint version is available on arXiv.[1]
The authors don't provide a complexity analysis of their algorithm. In lieu of attempting to derive that analysis myself, I'm deeply skeptical that their algorithm can find arbitrarily large prime numbers in sub-exponential time, let alone the polynomial time needed for breaking classical public-key cryptography. The absence of such a complexity analysis is conspicuous, because such an analysis would be groundbreaking.
However, I don't need to lean on the conspicuous absence of a proof of polynomial time complexity for their algorithm, because the algorithm has explicit limitations. It's an approximation method (albeit with high accuracy) that can only find primes located in dyadic intervals having many Bragg peaks and a relatively small left endpoint. In the paper they represent this interval with (M, M + L); in general a dyadic interval is an interval of the form [k/2^n, (k+1)/2^n], which may be either open or closed.
In other words the algorithm has costly limitations, no proof of polynomial time complexity and the authors don't even study its behavior for a left endpoint greater than 10^6. RSA semiprime numbers have 600 digits - this result isn't even in the same galaxy. Considering the explicit limitations in the authors' paper and the lack of interesting analysis indicating otherwise, it's likely that the problem of finding a suitable dyadic interval containing a large semiprime number's factors will just reduce to the problem of finding its factors anyway.
For what it's worth, this is all distinct from a more conceptual problem that undermines any use this result could have for cryptanalysis. Take a look at the prime counting function.[2] Per the prime counting function, there are approximately 2.53274 × 10^305 primes less than 2^1024 and approximately 2.27654 × 10^613 primes less than 2^2048. It's physically impossible to construct a lookup table consisting of all products of all pairs of primes in either set, because there isn't enough matter in the universe to contain the information. But even if there were, you would never feasibly pick out the correct semiprime number corresponding to a given RSA private key by doing lookups in this way.
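Those counts follow from the prime number theorem approximation pi(x) ≈ x / ln(x); a quick sketch to reproduce them (Decimal avoids float overflow at these magnitudes):

```python
# Estimate pi(2^1024) and pi(2^2048) via the prime number theorem.
from decimal import Decimal, getcontext

getcontext().prec = 50
ln2 = Decimal(2).ln()

for bits in (1024, 2048):
    x = Decimal(2) ** bits
    approx = x / (bits * ln2)   # ln(2^bits) = bits * ln 2
    print(bits, f"{approx:.5E}")
```

This reproduces the ~2.53 × 10^305 and ~2.28 × 10^613 figures, which dwarf any conceivable storage.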