So? I mean it's probably nice to have, but the performance of Redis isn't important enough to be a factor in the scalability of this.
I just tested, in Go, a binary search over 8 million random 100 byte DNA strings. 10,000 requests for autocompletions of random 5 byte prefixes yielded 78 million results in 38 milliseconds. And this is with the strings dotted around in memory after sorting them. Further optimizations are possible, but if they were hoping to prove some kind of performance benchmark with this, they have failed.
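For the curious: the commenter didn't share their code, but the approach described (sort once, then binary search for the prefix range) can be sketched roughly like this in Go. The `autocomplete` function and the sample data here are illustrative assumptions, not the original benchmark code.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// autocomplete returns every string in the sorted slice that starts with
// prefix, using two binary searches to locate the matching range.
func autocomplete(sorted []string, prefix string) []string {
	// First index whose value is >= prefix (start of the matching range).
	lo := sort.SearchStrings(sorted, prefix)
	// First index past the matching range: the earliest string that is
	// greater than prefix but no longer shares it.
	hi := sort.Search(len(sorted), func(i int) bool {
		return sorted[i] > prefix && !strings.HasPrefix(sorted[i], prefix)
	})
	return sorted[lo:hi]
}

func main() {
	// Toy stand-in for the 8 million random DNA strings.
	words := []string{"ACGT", "ACGG", "ACTA", "TTAA", "ACGA"}
	sort.Strings(words)
	fmt.Println(autocomplete(words, "ACG")) // [ACGA ACGG ACGT]
}
```

Each lookup is O(log n) plus the size of the result set, which is consistent with the numbers quoted above once the data is sorted.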
Algorithmically this is nothing exceptional, nor anything you can't reproduce easily, but the same is even more true of linked lists, which are the basis of the Redis List type. The point is to have these capabilities with a network interface, persistence, replication, and so forth.
I think it's just meant to demonstrate a novel use case for Redis. This will often be performant enough, in which case using out-of-the-box Redis will be favorable over writing and testing custom software.
However I'd also be interested to see that code, if it's open source. I've been spending lots of time with Go lately. :)