
Indeed; in terms of complexity theory, I would guess, not in terms of milliseconds.



Read http://en.wikipedia.org/wiki/Big_O_notation

Basically, it's talking about time complexity: how quickly the time it takes to compute a solution grows as the input grows in size.
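A rough way to see what those growth rates mean (a minimal Python sketch; the step counts are just the dominant terms, constants ignored):

    import math

    # How the amount of work scales for common complexity classes.
    # These are dominant terms only; real constant factors are ignored.
    for n in (10, 1_000, 100_000):
        print(f"n = {n:>7,}: "
              f"O(n) ~ {n:>10,}   "
              f"O(n log n) ~ {int(n * math.log2(n)):>12,}   "
              f"O(n^2) ~ {n * n:>16,}")

At n = 100,000 the quadratic algorithm is already doing roughly 6,000x more work than the linearithmic one.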


An algorithm with a lower time complexity will always take less time once the problem size grows large enough. You could run one algorithm on a modern CPU, using SIMD and tuned for cache coherency, and the other in QBASIC on an old 386, and the O(n log n) algorithm would still terminate earlier than the O(n^2) one past a certain n.
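You can watch this happen with a quick benchmark (a Python sketch; the insertion sort below is my own textbook O(n^2) example, and sorted() is CPython's built-in O(n log n) sort):

    import random
    import time

    def insertion_sort(items):
        # Textbook O(n^2) comparison sort.
        a = list(items)
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a

    for n in (500, 2_000, 8_000):
        data = [random.random() for _ in range(n)]
        t0 = time.perf_counter()
        insertion_sort(data)
        t1 = time.perf_counter()
        sorted(data)
        t2 = time.perf_counter()
        print(f"n = {n:>5}: O(n^2) {t1 - t0:7.3f}s   O(n log n) {t2 - t1:7.3f}s")

Each 4x increase in n costs the O(n^2) sort roughly 16x more time, but the O(n log n) sort only a bit over 4x, so no amount of constant-factor tuning saves the quadratic one at large n.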


The problem is that while dropping from O(n^2) to O(n log n) is a big deal, dropping from O(n log n) to O(n) won't help much if you end up increasing the constant costs at the same time.
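A back-of-the-envelope sketch of that trade-off (Python; the constants here are invented purely for illustration):

    import math

    # Hypothetical cost models: 1 * n * log2(n)  vs  50 * n.
    # The O(n) algorithm is asymptotically better but pays a 50x constant.
    C_NLOGN, C_LINEAR = 1.0, 50.0

    for exp in (10, 20, 40, 60):
        n = 2 ** exp
        cheaper = "O(n)" if C_LINEAR * n < C_NLOGN * n * math.log2(n) else "O(n log n)"
        print(f"n = 2^{exp:>2}: {cheaper} wins")

With a 50x constant penalty, the O(n) algorithm only pulls ahead once log2(n) exceeds 50, i.e. past n = 2^50 (about a quadrillion items), which is why the asymptotic win can be irrelevant in practice.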


It will help eventually, once n grows large enough. But problems at practical sizes are pretty much solved already.



