> cryptographic algorithms are largely implemented relying on primes
One of the steps in generating an RSA key is, "Randomly generate two large prime numbers p and q. Multiply p and q, and save the product pq = p*q as part of your public key."
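A rough sketch of that step in Python (sympy's randprime is used purely for illustration -- it is not a cryptographically secure generator, and the sizes here are just examples):

    # Sketch only: generate two random primes and form the RSA modulus n = p*q.
    from sympy import randprime

    def generate_modulus(bits=2048):
        """Pick two random primes of about bits/2 bits each and return (p, q, n)."""
        p = randprime(2**(bits // 2 - 1), 2**(bits // 2))
        q = randprime(2**(bits // 2 - 1), 2**(bits // 2))
        return p, q, p * q   # n = p*q goes into the public key; p and q stay secret

    p, q, n = generate_modulus()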
If anyone can figure out p and q from pq, then they can figure out your private key.
A while ago there was an article on HN about using the Euclidean algorithm to break SSL.
If you have two keys pq and rs, and one of the factors is the same (like p = r), then there's a well-known algorithm dating back thousands of years which you can use to quickly figure out the common factor's value.
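That ancient algorithm is Euclid's GCD, and the attack really is this short (tiny toy primes here, but the GCD stays fast even on 2048-bit moduli):

    import math

    # Two "public keys" that accidentally share the prime factor 101.
    n1 = 101 * 103
    n2 = 101 * 107

    p = math.gcd(n1, n2)           # Euclid's algorithm recovers the shared prime
    print(p, n1 // p, n2 // p)     # 101 103 107 -- both moduli are now fully factored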
So what the authors of a paper did was gather a couple million public keys from SSL websites, then run the Euclidean algorithm on each pair of keys, and many of them had common factors. Why? As noted above, the prime numbers are supposed to be picked "randomly," but they weren't picked randomly enough. (Getting a computer to produce random numbers is an interesting problem. The usual approaches are "faking it" -- technically called pseudorandom numbers -- and using input from hardware that interfaces with the outside world, like the system clock, keyboard, mouse, network, etc.)
There's even more at work here. In the context of RSA and DHMW there are bad primes and good primes. A prime p is potentially bad if p-1 has no large prime factors (i.e., p-1 is smooth). The algebraic structures associated with pq are then combinations of small algebraic structures, and that can (sometimes) make pq easier to factor.
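Assuming the weakness alluded to here is the one exploited by Pollard's p-1 method, a minimal sketch of why a smooth p-1 is dangerous (the bound B and the toy modulus are purely illustrative):

    import math

    def pollard_p_minus_1(n, B=50):
        """Find a factor p of n when p-1 is a product of small prime powers."""
        a = 2
        for j in range(2, B + 1):
            a = pow(a, j, n)          # after step j, a = 2^(j!) mod n
            g = math.gcd(a - 1, n)
            if 1 < g < n:
                return g              # (p-1) | j!  =>  2^(j!) = 1 (mod p)  =>  p | gcd
            if g == n:
                return None           # unlucky; retry with a different base
        return None

    # p = 1009 is a "bad" prime here: p-1 = 1008 = 2^4 * 3^2 * 7 is very smooth,
    # while q = 10007 has q-1 = 2 * 5003 with a large prime factor.
    print(pollard_p_minus_1(1009 * 10007))   # -> 1009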
So not just any old primes will work, and if two unrelated, unassociated people use the same software to generate "good" primes, sometimes they will coincide, leading to the potential for finding their factorizations with GCD on pairs.
There is little point in concerning oneself with smooth p-1, p+1, Φ_i(p) values. Randomly-chosen primes will have a negligible chance of having such properties.
Additionally, the more powerful ECM algorithm can be parametrized with roughly p different elliptic curves E_{a,b}(p), and it finds p whenever the chosen curve E_{a,b}(p) has a smooth order. Once again, the chance that a random pair (a,b) results in such a smooth order is negligible. This is, in a nutshell, the result of Rivest and Silverman [1], which dispels the need to jump through hoops to generate strong primes.
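For the curious, a bare-bones sketch of a single ECM trial (Lenstra's method with affine Weierstrass arithmetic; the smoothness bound, the missing second stage, and the retry strategy are all simplifications):

    import math
    import random

    class FactorFound(Exception):
        """Raised inside the curve arithmetic when a modular inverse fails."""
        def __init__(self, g):
            self.g = g

    def ecm_attempt(n, B=10000):
        """One ECM trial: a random curve/point mod n, multiplied by the B-smooth
        scalar B!.  A factor p of n appears when the curve's order mod p is smooth."""
        x, y, a = (random.randrange(n) for _ in range(3))
        P = (x, y)   # curve y^2 = x^3 + a*x + b, with b fixed implicitly by P

        def add(P, Q):
            # Affine point addition mod n.  If the slope's denominator is not
            # invertible mod n, its gcd with n is a divisor -- that "failure"
            # is exactly how ECM finds factors.
            if P is None:
                return Q
            if Q is None:
                return P
            (x1, y1), (x2, y2) = P, Q
            if x1 == x2 and (y1 + y2) % n == 0:
                return None                      # point at infinity
            if (x1, y1) == (x2, y2):
                num, den = (3 * x1 * x1 + a) % n, (2 * y1) % n
            else:
                num, den = (y2 - y1) % n, (x2 - x1) % n
            g = math.gcd(den, n)
            if g != 1:
                raise FactorFound(g)
            lam = num * pow(den, -1, n) % n
            x3 = (lam * lam - x1 - x2) % n
            return (x3, (lam * (x1 - x3) - y1) % n)

        def mul(k, P):
            # Double-and-add scalar multiplication built on add().
            R = None
            while k:
                if k & 1:
                    R = add(R, P)
                P = add(P, P)
                k >>= 1
            return R

        try:
            for j in range(2, B + 1):
                P = mul(j, P)     # cumulative scalar is B!
                if P is None:
                    return None   # reached infinity without a factor; try a new curve
        except FactorFound as e:
            if e.g < n:
                return e.g        # nontrivial factor of n
        return None

A factor only falls out when the randomly chosen curve's order mod p happens to be smooth, which is the rare event described above; in practice one runs many such attempts with fresh random curves.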
Right, so what you're saying is that when primes were smaller, as they used to be, strong (and safe) primes were a good idea because the earlier factoring methods could exploit those potential weaknesses. However, now that ECM and GNFS have forced us to move to much, much larger primes, those weaknesses are no longer relevant.
> two unrelated, unassociated people use the same software to generate "good" primes, sometimes they will coincide
I thought the GCD attack's success was due more to the lack of good entropy (randomness) in the random-number generation than to a lack of eligible primes of the requested length.
This makes a lot of sense -- some people might run the random-number generation on server systems, which often don't have any sources of human input (mouse movements and keystrokes can be a great source of randomness).
Or maybe it's simply a case of people using /dev/urandom instead of /dev/random because it's faster. (Which it is, but it's also not something you want to use for something as important as generating a key for long-term use.)
Yes, prime factorization is the predominant method for making cryptography "hard" to brute-force. Earlier in its history, discrete logarithms were also used to make brute force unpalatable, but factorization-based cryptography rules the roost right now.