Hacker News
Surprising hidden order unites prime numbers and crystal-like materials (phys.org)
157 points by foxes on Sept 8, 2018 | 39 comments



A brief explanation of why primes peak at repeated multiples, from a layman who's wondered about this before.

Obvious first example: all primes above 2 are of the form 2x+1. An obvious repeating pattern of primes.

You can take this a small step further. All primes above 6 are of the form 6x+1 or 6x+5. Anything else is a multiple of 2 or 3. Above 6 only 1/3 of numbers are worthy of being considered prime. This is a slightly less obvious example.

A small step further - all primes above 30 are of the form 30x+1, 30x+7, 30x+11, 30x+13, 30x+17, 30x+19, 30x+23 or 30x+29. Anything else is a multiple of 2, 3 or 5. So above 30 only 8/30 numbers are worthy of being considered prime. See how we've used the multiple of 2x3x5 to create a new pattern that rules out a swath of prime candidates.

I could repeat this for each new prime found. E.g. I could take the common multiple of 2, 3, 5, 7 (210) and create a similar pattern for all numbers above 210 that rules out the repeated multiples of 2, 3, 5 and 7 (leaving us just 58/210 numbers worthy of being considered prime).

This is why you see peaks of primes at various repeating multiples. For every new prime found, you can take the product of it and all previous primes, and from that rule out primality for various offsets from any multiple of that number. So primes above certain numbers can only possibly exist in certain forms, which is why you see primes at repeated offsets from each other - the primes can only exist in those forms.
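
A minimal Python sketch of the pattern above (function name is mine): list the residues mod a product of small primes that can still contain primes.

    # Residues mod a primorial (2*3, 2*3*5, ...) that can still contain
    # primes; every other residue class is a multiple of a smaller prime.
    from math import gcd, prod

    def prime_candidates(primes):
        m = prod(primes)
        return [r for r in range(1, m) if gcd(r, m) == 1]

    print(prime_candidates([2, 3]))     # [1, 5] -> the forms 6x+1, 6x+5
    print(prime_candidates([2, 3, 5]))  # [1, 7, 11, 13, 17, 19, 23, 29]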


This pattern forms the basis for wheel factorization [1], a faster way to factor a number than naïve trial division.

[1] https://en.wikipedia.org/wiki/Wheel_factorization
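
For illustration, a hedged sketch of a mod-30 wheel in Python (variable names are mine): after stripping out 2, 3 and 5, only trial divisors congruent to the eight residues coprime to 30 need to be tested.

    # Trial division with a 2-3-5 wheel: candidate divisors jump between
    # the residues 7, 11, 13, 17, 19, 23, 29, 31 (mod 30).
    def factor(n):
        factors = []
        for p in (2, 3, 5):
            while n % p == 0:
                factors.append(p)
                n //= p
        gaps = (4, 2, 4, 2, 4, 6, 2, 6)  # gaps between successive residues
        d, i = 7, 0
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += gaps[i]
            i = (i + 1) % 8
        if n > 1:
            factors.append(n)
        return factors

    print(factor(3822))  # [2, 3, 7, 7, 13]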



Author of this thread here. That article states that the above sieve was created in 2003.

I actually wrote and described it back in 2002 - https://forums.overclockers.com.au/threads/show-off.71630/#p...

I didn't think much of it back when I did it and I'm not a mathematician, just a hobbyist. I guess I should publish my prime hobbies more...


Have you thought about mailing it to the authors of the paper? They should somehow cite you, I guess.


That seems so obvious when you put it like that. (Although I guess all of mathematics is either "obvious" or "unsolved".)


I think most of what we've solved in math is not obvious.

Some accessible examples would be Fermat's Last Theorem or the four color problem.


Small correction: there are only 48 (not 58) numbers out of 210 that are not multiples of 2,3,5,7.
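
To verify: the count is Euler's totient of 210, i.e. 210 × (1/2) × (2/3) × (4/5) × (6/7) = 48, where each factor removes the multiples of 2, 3, 5 and 7 respectively.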


I haven't done all the math for this (I've deeply investigated the pattern for 2x+1), but it seems like this would be an obvious and intuitive consequence of the primes. You are still generating primes from primes. Yes, you find more primes, but the computation is still dependent on primes. I'm still of the opinion that there is no complete pattern to the primes.

I'm assuming the researchers do not intend to confuse a crystal lattice structure with an actual mathematical lattice, because while the two may share similar influences in their models, one is math, the other is physics.

#keepmathpure


I've heard this sentiment before and I don't really understand it. There's no separating physics and math. Keeping math "pure" really means keeping it "purely abstract", so it resists any kind of practical application.


> There's no separating physics and math.

There is. Even though most physics research is extremely mathematical and abstract these days, it's still ostensibly grounded in empirical science. Math is not science, it just provides useful tools and insights for studying science. Unlike physics, the disciplinary imperative of math is not to provide us with truths about this world or any other world. Its imperative is to tell us what must follow as a consequence from a given set of assumptions and definitions. This is a very important philosophical dichotomy because it means that even the most lackadaisical, abstract problems in physics (such as moonshine in high energy physics) are still grounded in something "real." Math need not be grounded to anything real; it can be decoupled from what is real or even possible entirely.

> Keeping math "pure" really means keeping it "purely abstract", so it resists any kind of practical application.

I'm not one to be elitist with regards to pure versus applied mathematics, so I sympathize with your point here. That being said, purely abstract mathematics can be extremely useful even if it doesn't ultimately relate to the real world. Consider what G.H. Hardy wrote nearly a century ago in A Mathematician's Apology:

"...both Gauss and lesser mathematicians may be justified in rejoicing that there is [number theory] at any rate...whose very remoteness from ordinary human activities should keep it gentle and clean."

If only Hardy had lived long enough to see his pure and beautiful number theory sullied with the applications to error-correcting codes and cryptography.


While I certainly understand that math and physics are separate concepts, physics as we know it wouldn't be possible without math. I'm sure in your mind you can separate them, but if you took math away from physics, we wouldn't have modern physics.

Math is how physics is given practical application; if anything, that means science is more abstract than math is.


It's about direction of influence.

Physics should not influence math. Math has to be a consequence of its own axioms, or have its properties tested against its own constructions.

Otherwise math falls apart. Then it's useless for physics.

Practical application is wonderful for math. But the direction math is crafted and interpreted matters a lot.


I don't agree with this. And honestly, I really think that depends on what foundation you rely on to think with, work with, create with, test with, and check your own tests with. Physics does not have to use itself to understand itself. Math does.


> Physics does not have to use itself to understand itself.

Could you explain what you mean by that?


It's simple. Physics uses mathematics to construct equations that describe properties of physics.

The difference is physics has reality to test against - observations that can be measured. Math does not have this. Math's only metric against itself is itself.


In your example you started at 1/3 (33.33%) and moved to 58/210 (27.62%). What does this limit approach ad infinitum?
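
For what it's worth, the surviving fraction after sieving by all primes up to n is the product of (1 - 1/p); by Mertens' third theorem this behaves like e^-gamma / ln(n), so the limit is 0. A quick numerical sketch in Python (helper names are mine):

    from math import exp, log

    GAMMA = 0.5772156649  # Euler-Mascheroni constant

    def primes_up_to(n):
        """Simple sieve of Eratosthenes."""
        is_prime = bytearray([1]) * (n + 1)
        is_prime[0:2] = b"\x00\x00"
        for i in range(2, int(n ** 0.5) + 1):
            if is_prime[i]:
                is_prime[i * i :: i] = bytearray(len(is_prime[i * i :: i]))
        return [i for i in range(2, n + 1) if is_prime[i]]

    def surviving_fraction(n):
        frac = 1.0
        for p in primes_up_to(n):
            frac *= 1 - 1 / p
        return frac

    for n in (10, 100, 10000):
        print(n, surviving_fraction(n), exp(-GAMMA) / log(n))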



Link to the paper: https://arxiv.org/abs/1802.10498

Also worth noting that Freeman Dyson pointed out the link between prime numbers, or the Riemann zeta function, and quasicrystals, see p. 2-3 of http://www.ams.org/notices/200902/rtx090200212p.pdf

And here's a MathOverflow discussion of Dyson's ideas: https://mathoverflow.net/questions/133581/quasicrystals-and-...


I don't find it at all surprising that we find a natural occurrence of prime number patterns in nature. It makes perfect sense to find them in crystals if you think about it for a while.

First, remember that 'primeness' is not really a property of a number, it's the absence of the property of being composite. "Can only be divided..." meaning "Can't be divided..." meaning "Not composite."

Second, crystals are formed by repeating patterns. What happens if you compose a pattern from many repeating patterns and overlay them? There would be 'features' where the patterns don't overlap.
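
A toy illustration of the overlay idea in Python (a sketch; names are mine): mark the repeating multiples of each small prime and look at which positions stay uncovered.

    # Overlay the repeating patterns of multiples of 2, 3, 5 and 7; the
    # uncovered positions below 11^2 = 121 are exactly the primes, since
    # any composite below 121 has a factor among 2, 3, 5, 7.
    N = 120
    covered = set()
    for p in (2, 3, 5, 7):
        covered.update(range(2 * p, N, p))  # each prime's repeating pattern

    features = [n for n in range(2, N) if n not in covered]
    print(features)  # [2, 3, 5, 7, 11, 13, ..., 113] - the primes below 120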

Do papers try to make things seem difficult, exciting and mysterious on purpose?


> First remember that 'primeness' is not really a property of a number, it's the absence of the property of being composite.

What do you mean by this? What do you believe a "property" is?

> Do papers try to make things seem difficult, exciting and mysterious on purpose?

Yes, being surprising/exciting is a criterion for being published.


Certainly both (prime, composite) could be, and are, described as properties. The difference is that one can be described positively as 'having' and the other as 'not having' or 'having only'. If you've read Gödel, Escher, Bach, it's very much in how it describes axiomatic space with 'foreground' vs 'background' when looking at the boundary of what's inside and outside a set with a given property. Compositeness is a construction. Primeness is what's outside.

A sort of example from the book that plays in the 'background': https://en.wikipedia.org/wiki/Berry_paradox


Reminds me, whatever happened to the research into parallax compression?

http://www.novaspivack.com/science/we-have-discovered-a-new-...


I read once that prime numbers were a key element of cryptography (because it's easy to multiply two prime numbers, but difficult to recover the two primes from their product, if I remember correctly). Will this discovery have a negative impact on it?


No it won't. This result is related to the problems of proving properties about the prime numbers (such as gaps) and of determining whether or not a given number is prime. It has nothing to do with the computational intractability of factoring large numbers.

RSA utilizes an extremely large semiprime (the product of two very large prime numbers) to generate a public/private key pair. This result does not meaningfully change anything related to the computational work required to factor semiprime numbers that have over 600 digits.
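
As a toy sketch of where that semiprime fits (absurdly small primes, purely for exposition; real key generation works differently and uses ~1024-bit primes):

    p, q = 61, 53                    # two secret primes
    n = p * q                        # 3233, the public semiprime modulus
    phi = (p - 1) * (q - 1)          # 3120
    e = 17                           # public exponent, coprime to phi
    d = pow(e, -1, phi)              # private exponent (Python 3.8+)

    msg = 65
    cipher = pow(msg, e, n)          # encrypt with the public key (e, n)
    assert pow(cipher, d, n) == msg  # decrypt with the private key (d, n)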


No.

It's the non-factorability of primes that is important.


> It’s the non-factorability of primes that is important.

Primes by definition have two, and only two factors.

The difficulty of factoring a number which is the product of two large primes is the important bit.


I think the usual abstract algebra definition is one factor: itself. 1 is not considered a factor so that prime factorizations are unique, otherwise you could tack on an infinity of ones.

Also the Möbius function becomes strange with infinitely many factors.


The abstract algebra definition is that p is a prime if whenever p divides a product ab then p divides at least one of a and b.

Having no proper factors is the definition of an irreducible element.

The two definitions agree for the integers and nicer algebraic structures (UFDs), but there are algebraic structures in which they are not the same, which explains why there are two different definitions and names in abstract algebra.
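
A standard example of the split, in case it helps: in Z[sqrt(-5)], the element 2 is irreducible, yet 2 divides 6 = (1 + sqrt(-5))(1 - sqrt(-5)) without dividing either factor, so 2 is not prime there.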


Pardon my extreme ignorance of the subject, but could you elaborate on why your statement is mutually exclusive of the OP? Not at all combative, just a sincerely interested layman :)


It doesn't matter to the algorithm whether we know it's a product of two primes. Anyone attacking an RSA private key knows that it is.

The only unknown is which two primes.


Which are easier to generate with a prime formula.


Define "easier" in this context. If you have an algorithm whose complexity increases exponentially with the key length, saying you made a 10% gain doesn't mean squat.


And it's wrong already. We already know all primes used in encryption (just download a prime-number list).


This is wrong. We have lists of all primes used in crypto already. Just knowing them all doesn’t make attacks quicker.


Yes. If we can quickly generate large primes in a sequence then we can find the products easier.


You should read the authors' paper directly; a preprint version is available on arXiv.[1]

The authors don't provide a complexity analysis of their algorithm; absent an attempt to derive that analysis myself, I'm deeply skeptical that their algorithm can find arbitrarily large prime numbers in sub-exponential time, let alone the polynomial time needed for breaking classical public-key cryptography. The absence of such a complexity analysis is conspicuous, because such an analysis would be groundbreaking.

However, I don't need to lean on the conspicuous absence of a proof of polynomial time complexity for their algorithm, because their algorithm has explicit limitations. It's an approximation method (albeit with high accuracy) that can only find primes located in dyadic intervals having many Bragg peaks and a relatively small left endpoint. In the paper they represent this interval with (M, M + L); in general a dyadic interval is an interval of the form (k/2^n, (k+1)/2^n), which may be either open or closed.

In other words, the algorithm has costly limitations, no proof of polynomial time complexity, and the authors don't even study its behavior for a left endpoint greater than 10^6. RSA semiprime numbers have over 600 digits - this result isn't even in the same galaxy. Considering the explicit limitations in the authors' paper and the lack of analysis indicating otherwise, it's likely that the problem of finding a suitable dyadic interval containing a large semiprime number's factors will just reduce to the problem of finding its factors anyway.

For what it's worth, this is all distinct from a more conceptual problem that undermines any use this result could have for cryptanalysis. Take a look at the prime counting function.[2] Per the prime counting function, there are approximately 2.53274 × 10^305 and 2.27654 × 10^613 primes less than 2^1024 and 2^2048, respectively. It's physically impossible to construct a lookup table consisting of all products of all pairs of primes in either set, because there isn't enough matter in the universe to contain the information. But even if there were, you would never feasibly pick out the correct semiprime number corresponding to a given RSA private key by doing lookups in this way.
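
Those counts follow from the prime number theorem estimate pi(x) ≈ x/ln(x). A quick sanity check in Python, working in log space since 2^2048 overflows a float (variable names are mine):

    from math import log, log10

    for bits in (1024, 2048):
        # log10 of pi(2^bits) ~ 2^bits / ln(2^bits)
        log10_pi = bits * log10(2) - log10(bits * log(2))
        exponent = int(log10_pi)
        mantissa = 10 ** (log10_pi - exponent)
        print(f"pi(2^{bits}) ~ {mantissa:.4f}e{exponent}")
        # -> ~2.53e305 and ~2.28e613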

_______________________________________

1. https://arxiv.org/pdf/1802.10498.pdf

2. https://en.wikipedia.org/wiki/Prime-counting_function


That’s not really true. We don’t find new primes to make cryptography safe.

We already have very large lists of primes. It's figuring out which primes were used that is the expensive part of attacking crypto.

Additionally, the OP oversimplified how the cryptography works by saying you just need to find the products to decrypt.


No mention of whether this knowledge can be exploited to factor integers? You know where I’m going with this...



