No. This is outright WRONG. As others have pointed out, complexity classes describe asymptotic complexity: Big O describes how an algorithm's cost scales with input size. The class only gives you the rough shape of the performance curve as the input size grows toward infinity.

An O(1) algorithm is by no means guaranteed to outperform an O(N^2) one on any practical dataset. One is constant time (even though that constant can be so large it won't finish before the heat death of the universe); the other merely gets quadratically slower as the input size increases.
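
As a toy illustration (my own sketch, not from the article; the crossover point depends entirely on machine, compiler, and data): an O(1) average-case hash lookup still pays for hashing and a likely cache miss on every query, so a plain O(n) scan over a small contiguous array frequently beats it on wall clock.

    #include <algorithm>
    #include <cstdint>
    #include <unordered_set>
    #include <vector>

    // O(n): contiguous, branch-predictable, trivially vectorizable.
    bool contains_scan(const std::vector<int32_t>& v, int32_t x) {
        return std::find(v.begin(), v.end(), x) != v.end();
    }

    // O(1) on average, but every call hashes x and chases at least one pointer.
    bool contains_hash(const std::unordered_set<int32_t>& s, int32_t x) {
        return s.count(x) != 0;
    }

Only past some machine-dependent n does the hash lookup pull ahead.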

For example, if an O(n^2) algorithm can be implemented with SIMD, you could easily expect it to heavily outperform an O(n) algorithm on many practical datasets when wall-clock performance is measured.
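
A hypothetical concrete case, sketched below: duplicate detection. The O(n^2) brute force reads contiguous memory with a branch-free inner loop, so compilers can auto-vectorize it with SIMD; the O(n) hash-set version hashes, allocates, and pointer-chases on every element, so for small-to-moderate n the brute force often wins.

    #include <cstddef>
    #include <cstdint>
    #include <unordered_set>
    #include <vector>

    // O(n^2), but SIMD-friendly: contiguous reads, branch-free inner loop.
    bool has_duplicate_bruteforce(const std::vector<int32_t>& v) {
        for (std::size_t i = 0; i < v.size(); ++i) {
            int found = 0;
            for (std::size_t j = i + 1; j < v.size(); ++j)
                found |= (v[j] == v[i]);  // no branch, auto-vectorizable
            if (found) return true;
        }
        return false;
    }

    // O(n), but each insert hashes, allocates, and misses cache.
    bool has_duplicate_hashset(const std::vector<int32_t>& v) {
        std::unordered_set<int32_t> seen;
        for (int32_t x : v)
            if (!seen.insert(x).second) return true;
        return false;
    }

Where the crossover sits is, again, something you only learn by measuring.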

You can't draw immediate conclusions about an algorithm's real-world performance from its Big O class alone.




Yup, something that is often omitted when talking about O notation is the absolute per-item cost and the setup/activation cost.

SIMD is also not easy to reason about, because the math changes quite a bit once you become bottlenecked by bandwidth: Zen 5's AVX-512 implementation lets you blaze through data at 128-256 B/cycle (think >128 B per 0.25 ns), which is absolutely bonkers. But that quickly comes to a halt once your working set no longer fits in L1 and L2 - streaming data from/to RAM is just about 32 B/cycle.
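
A quick way to see that cliff yourself (a minimal sketch; the buffer sizes below are rough stand-ins for L1/L2/L3/RAM and the numbers are entirely machine-dependent): stream over buffers of increasing size and watch effective bandwidth drop as the working set falls out of cache.

    #include <chrono>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <numeric>
    #include <vector>

    int main() {
        // Rough working-set sizes: ~L1, ~L2, ~L3, RAM (machine-dependent).
        for (std::size_t kb : {16, 256, 4096, 65536}) {
            std::vector<std::uint64_t> buf(kb * 1024 / sizeof(std::uint64_t), 1);
            std::uint64_t sum = 0;
            const int reps = 100;
            auto t0 = std::chrono::steady_clock::now();
            for (int r = 0; r < reps; ++r)
                sum += std::accumulate(buf.begin(), buf.end(), std::uint64_t{0});
            auto t1 = std::chrono::steady_clock::now();
            double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
            double bytes = double(reps) * double(kb) * 1024.0;
            // 1 byte/ns == 1 GB/s; print sum so the loop isn't optimized away.
            std::printf("%6zu KiB: %.1f GB/s (sum=%llu)\n",
                        kb, bytes / ns, (unsigned long long)sum);
        }
    }

Past the L2-sized buffer the reported GB/s should fall off sharply.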

So memory traffic becomes an important consideration, and at that point algorithms that are "better by Big O classification", even if their data access is more granular, might start to outperform the brute-force approach.



