Radix sort is theoretically O(N), but memory access is logarithmic, so in reality you can't do better than O(N log N) no matter what algorithm you use. Only constant factors matter at that point.
Edit: I misremembered, memory access is actually O(sqrt(N)): https://github.com/emilk/ram_bench
Nothing theoretical about it: sorting a list of all IP addresses can absolutely and trivially be done in O(N).
> in reality you can't do better than O(N log N)
You can't even traverse the list once in fewer than N steps, so the complexity of any sort must be ≥ N.
> but memory access is logarithmic
No it's not, but it's also irrelevant: a radix sort doesn't need any reads if the values are unique and dense (as is the case with IP addresses, permutation arrays, and so on).
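Roughly, a sketch of that dense-and-unique case for 32-bit keys such as IPv4 addresses (illustrative only; this uses a presence bitmap rather than multiple radix passes, and the function name is made up):

    #include <cstdint>
    #include <vector>

    // Sort a set of unique 32-bit keys (e.g. IPv4 addresses) in O(N + 2^32):
    // mark each key in a presence bitmap, then emit the set positions in order.
    // Placement never reads previously written output; the only pass over the
    // input is the marking loop.
    std::vector<uint32_t> bitmap_sort(const std::vector<uint32_t>& keys) {
        std::vector<bool> present(1ull << 32, false);    // 2^32 bits, ~512 MiB
        for (uint32_t k : keys) present[k] = true;       // single linear pass over the input
        std::vector<uint32_t> out;
        out.reserve(keys.size());
        for (uint64_t v = 0; v < (1ull << 32); ++v)      // linear pass over the key space
            if (present[v]) out.push_back(static_cast<uint32_t>(v));
        return out;
    }

For the full IP address space the second loop is itself O(N), which is what makes the "trivially O(N)" claim above concrete.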
The author ran out of memory; they ran a program that needs 10 GB of RAM on a machine with only 8 GB of RAM. If you give that program enough memory (I have around 105 GB free) it produces a silly graph that looks nothing like O(√N): https://imgur.com/QjegDVI
The latency of accessing memory is not a function of N.
The latency of accessing physical memory is asymptotically a function of N for sufficiently large N, i.e. big enough that signal propagation delay to the storage element becomes a noticeable factor.
This is not generally relevant for PCs because the distance between cells in the DIMM does not affect timing; i.e. the memory is timed based on worst-case delay.
> The latency of accessing memory is not a function of N.
How could it not be, given that any physical medium has finite information density, and information cannot propagate faster than the speed of light?
And on a practical computer the size of N will determine whether you can do all your lookups from registers, L1-L3 cache, or main RAM (plus SSD, unless you disable paging).
I feel we must be talking past each other. I thought the examples so far were about random access of N units of memory. The standard convention of pretending this can be done in O(1) always struck me as bizarre, because it's neither true for practical, finite values of N (memory hierarchies are a thing) nor asymptotically for any physically plausible idealization of reality.
You still need to write each element to its target location, and each such write costs O(sqrt(N)) under that model, so the "strict" running time of radix sort would be O(N^1.5) in layered memory hierarchies. In flat memory hierarchies (no caching), as found in microcontrollers, it reduces back to O(N).
But radix sort doesn't write randomly to the output array like that. Yes, each element ends up in the right place, but through a series of steps which each have better locality of reference than random writes.
In this way, radix sort is usually faster than an "omniscient sort" that simply scans over the input writing each element to its final location in the output array.
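To make the locality argument concrete, here is a minimal LSD radix sort sketch (byte-wise passes over 32-bit keys; illustrative, not anyone's production code). Each pass streams the input once and writes through 256 advancing bucket cursors, so the writes form sequential runs instead of N independent random writes:

    #include <cstdint>
    #include <vector>

    // LSD radix sort on 32-bit keys, one byte (256 buckets) per pass.
    void radix_sort_u32(std::vector<uint32_t>& a) {
        std::vector<uint32_t> tmp(a.size());
        for (int shift = 0; shift < 32; shift += 8) {
            size_t count[256] = {0};
            for (uint32_t x : a) ++count[(x >> shift) & 0xFF];                   // histogram pass
            size_t offset[256];
            size_t sum = 0;
            for (int b = 0; b < 256; ++b) { offset[b] = sum; sum += count[b]; }  // prefix sums
            for (uint32_t x : a) tmp[offset[(x >> shift) & 0xFF]++] = x;         // 256 sequential write streams
            a.swap(tmp);
        }
    }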
If you can't see the clear sqrt(N) behaviour in those plots, then I recommend using an online graphing calculator to see what such a graph actually looks like. The sqrt trend is plain as day.
Also, radix sort is a pretty special case because it assumes you want to sort by some relatively uninteresting criterion (be honest, how often are you sorting things by a number and only a number?). What happens in the real world is that the size of the fields you want to sort on tends to grow with log n. If you had half a billion John Smiths using your service you’d use some other identifier that is unique, and unique values grow in length faster than log n.
I’m glad other people are having this conversation now and not just me.
Here’s a different take: if you find yourself needing to sort a lot of data all the time, maybe you should be storing it sorted in the first place.
Stop faffing about with college textbook solutions to problems like you’re filling in a bingo card and use a real database. Otherwise known as Architecture.
I haven’t touched a radix sort for almost twenty years. In that time, hardware has sprouted an entire additional layer of caching. I bet you’ll find that on real hardware, with complex objects, a production-ready Timsort is competitive with your hand-written radix sort.
I develop programs that take unsorted data as input and sort it as part of the processing... I haven't touched a database in 10 years, because I do graphics programming; nothing I work on needs a database. Also, I don't actually use radix sort but a hand-written CUDA counting sort that beats the crap out of radix sort for my given use cases: counting sort is often used as part of radix sort, but simple counting sort is all I need.
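The basic counting-sort idea, minus the CUDA specifics, is just this (rough sketch with made-up names, not the actual GPU implementation):

    #include <cstdint>
    #include <vector>

    // Counting sort for keys in [0, key_range): count occurrences of each key,
    // then emit keys in order. Two linear passes, no comparisons, duplicates kept.
    std::vector<uint32_t> counting_sort(const std::vector<uint32_t>& keys, uint32_t key_range) {
        std::vector<size_t> count(key_range, 0);
        for (uint32_t k : keys) ++count[k];              // histogram pass
        std::vector<uint32_t> out;
        out.reserve(keys.size());
        for (uint32_t v = 0; v < key_range; ++v)
            out.insert(out.end(), count[v], v);          // emit each key count[v] times
        return out;
    }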
> and use a real database.
I'm not going to use a database to store tens of billions of 3D vertices. And I'm not going to use a database to sort them either, because it's multiple times, probably orders of magnitude, faster to sort them yourself.
It's weird to impose completely out-of-place suggestions onto someone who does something completely different to what you're thinking of.
Radix sort works for numbers, and therefore also for characters. It also works for lexicographic ordering of finite lists of any type it supports, so it can sort strings, and also tuples like (int, string, float). So it can actually sort all plain-old-data types.
MSD radix sort is a pretty good fit for this John Smiths input and will definitely outperform a comparison-based sort that has to check that "John Smith" equals itself n log n times.
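A rough sketch of that MSD approach on strings (one bucket per byte at each character position; illustrative and untuned):

    #include <string>
    #include <vector>

    // MSD radix sort on byte strings: bucket by the character at `pos`, then
    // recurse into each bucket at pos + 1. A shared prefix like "John Smith"
    // is examined once per character position, not once per comparison.
    // Bucket 0 holds strings that end at `pos`; buckets 1..256 hold bytes 0..255.
    void msd_radix_sort(std::vector<std::string>& a, size_t pos = 0) {
        if (a.size() < 2) return;
        std::vector<std::vector<std::string>> buckets(257);
        for (auto& s : a) {
            size_t b = (pos < s.size()) ? 1 + static_cast<unsigned char>(s[pos]) : 0;
            buckets[b].push_back(std::move(s));
        }
        a.clear();
        for (size_t b = 0; b < buckets.size(); ++b) {
            if (b != 0) msd_radix_sort(buckets[b], pos + 1);  // bucket 0 strings are already equal from here on
            for (auto& s : buckets[b]) a.push_back(std::move(s));
        }
    }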
Random memory access has a non-constant upper bound (assuming ever larger and slower caches, without limit), but radix sort is mostly linear memory access.
Sure, you can force a fiction of O(1) by dramatically increasing latency and strictly limiting the size of memory, as we do with microcontrollers. This would now be O(1) with a very large constant factor overhead, basically pinning memory access to the latency of the memory cells that are furthest from the CPU.
I'm not disputing you can establish an upper bound on latency. You can always do this by using a system's slowest component as the upper bound and pin everything else's latency to that bound, as I said. I'm just pointing out that this upper bound a) doesn't scale well/leaves a lot of performance on the floor, and b) the upper bound is very sensitive to size and geometry.
In high performance systems, constant time random access is just not constant.