The O(N log N) algorithms all have very large constant factors - IIRC the crossover point from Karatsuba or Toom-Cook to the FFT-based solutions [0] (the roughly N log N algorithms) is in the thousands of digits.
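For intuition, here's a rough Python sketch of the Karatsuba step for nonnegative integers (the 2^64 cutoff is an arbitrary placeholder; real libraries like GMP tune the base-case threshold empirically, and Python's builtin multiply already does this internally):

    def karatsuba(x, y):
        # Base case: small enough that schoolbook/hardware multiply wins.
        if x < 2**64 or y < 2**64:
            return x * y
        m = max(x.bit_length(), y.bit_length()) // 2
        xh, xl = x >> m, x & ((1 << m) - 1)   # split x into high/low halves
        yh, yl = y >> m, y & ((1 << m) - 1)
        a = karatsuba(xh, yh)                 # high * high
        b = karatsuba(xl, yl)                 # low * low
        c = karatsuba(xh + xl, yh + yl)       # equals a + b + middle term
        # Three recursive multiplies instead of four -> O(n^log2(3)).
        return (a << (2 * m)) + ((c - a - b) << m) + b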
The recent O(n log n) algorithm [1] is only useful for numbers of size larger than 2^(2^(1729^12)), i.e. with more than 2^(1729^12) bits. Which is to say, it will never be of practical use in our universe. (Of course, conceivably someone else could invent a different algorithm of the same asymptotic complexity with a more reasonable constant factor.)
But it’s still nice to have a proof that this conjectured lower bound is attainable in theory. It makes it easier to write down theoretical lower bounds for a wide variety of other algorithms.
If it could go lower than n log n, that would have huge implications for signal processing. Multiplication, convolution, and FFTs are all closely related: multiplying two numbers is essentially convolving their digit sequences, and the FFT is what makes convolution fast.
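To make that concrete, here's a toy Python/NumPy demo that multiplies integers by FFT-convolving their base-10 digit sequences. It uses floating-point FFTs, so rounding limits it to modest sizes; the real FFT-based algorithms (e.g. Schönhage-Strassen [0]) use exact modular transforms instead:

    import numpy as np

    def fft_multiply(x, y, base=10):
        # Digit sequences, least significant digit first.
        xd = [int(d) for d in str(x)[::-1]]
        yd = [int(d) for d in str(y)[::-1]]
        n = 1 << (len(xd) + len(yd) - 1).bit_length()  # pad to a power of two
        # Pointwise product of FFTs == convolution of the digit sequences.
        conv = np.fft.ifft(np.fft.fft(xd, n) * np.fft.fft(yd, n)).real
        coeffs = np.rint(conv).astype(np.int64)
        # Carry propagation turns convolution coefficients back into digits.
        result, carry = 0, 0
        for i, c in enumerate(coeffs):
            carry, digit = divmod(int(c) + carry, base)
            result += digit * base**i
        return result + carry * base**len(coeffs)

    print(fft_multiply(999, 999))  # 998001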
Conventionally n is the length of the input (its number of bits or digits), which is logarithmic in the numerical value of the input for algorithms dealing in integers.
Moreover, analyzing Russian peasant multiplication as O(log N) implies treating addition as constant-time, which it certainly isn't. (This becomes intuitively clear if we try applying the algorithm to sufficiently large integers!)
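A quick sketch in Python (which has arbitrary-precision integers) with the cost model made explicit:

    def peasant_multiply(a, b):
        # ~log2(b) iterations, but each addition of n-bit numbers costs
        # O(n) bit operations, so the true bit complexity is
        # O(n log b), not O(log b).
        total = 0
        while b > 0:
            if b & 1:
                total += a   # NOT constant-time once a outgrows a machine word
            a <<= 1          # double
            b >>= 1          # halve
        return total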
O(n log n) has for some time been conjectured to be the lower bound.
[0] https://en.wikipedia.org/wiki/Sch%C3%B6nhage%E2%80%93Strasse...
[1] https://hal.archives-ouvertes.fr/hal-02070778/document
[2] https://news.ycombinator.com/item?id=19474280