Do you even understand the significance of "lock-free" in the title here? Do you have some basis to think that the weaknesses of the data structures, like unbalanced trees and add-only collections, can be remedied without using locks? If you do then that would be an interesting criticism, but as it is it looks like you just opened the page, looked around for a few things to criticize, and then wrote this sour, shallow, dismissive comment.
A quick google shows that there are lock-free implementations of AVL trees and red-black trees. And every paper I've ever read about lock-free data structures includes both addition and removal operations.
Yes. I have seen the white paper for the red-black tree, and the approach used should work for AVL (I've not seen that paper). I've just not had time to implement it yet. A full O(log n) data structure is necessary for the library to be a really useful piece of work - but there are a lot of other things which are necessary for that, too, and they have so far taken up most of the development time.
About 10% of the effort so far has gone into coding the actual data structures - almost all the work goes into testing, benchmarking, performance tuning, documentation, porting to new platforms, fundamental improvements in the basic design, and so on. The benchmark app alone has taken, all in all, about six full months to write. I thought it'd be simple; it turned out to be a tar-pit!
Honestly, I wouldn't bother with red-black or AVL. I'm not sure about the current state of lock-free data structures, but I prefer a B-Tree to a red-black or AVL tree. At least in the single-threaded case B-Tree operations tend to be faster, have better cache locality, etc. Google reveals there are lock-free implementations of B-Trees, so maybe look into those instead?
Could be. I will look into the choice before I commit - figuring out how one of these complex lock-free data structures works, implementing it correctly, and writing all the test cases is a major undertaking. It's something you want to make sure you're making the right choice about before you invest the time.
As an aside, binary trees turn out to be really great with lock-free. In the obvious locking implementations, there's one lock per tree. You can use reader-writer locks to get parallelism on reads, but even so, the mere existence of a single lock of any kind induces memory contention, which eliminates scaling as processor counts rise. There's a sketch of that single-lock design just below.
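To make the single-lock problem concrete, here is a minimal, hypothetical sketch (not liblfds code) of the "obvious" locking design described above: a plain binary search tree guarded by one pthreads reader-writer lock. Every operation, read or write, hits the same lock word, which is exactly the contention point being discussed.

    /* Hypothetical sketch - the "obvious" locking design: one reader-writer
       lock guarding an entire tree. Every thread touches the same lock word,
       so the cache line holding the lock bounces between cores and becomes
       the bottleneck as core counts rise, even when all threads only read. */
    #include <pthread.h>
    #include <stdlib.h>

    struct node {
        int key;
        struct node *left, *right;
    };

    struct locked_tree {
        pthread_rwlock_t lock;   /* the single, tree-wide lock */
        struct node *root;
    };

    int locked_tree_contains(struct locked_tree *t, int key)
    {
        pthread_rwlock_rdlock(&t->lock);   /* readers still share one lock word */
        struct node *n = t->root;
        while (n != NULL && n->key != key)
            n = key < n->key ? n->left : n->right;
        int found = (n != NULL);
        pthread_rwlock_unlock(&t->lock);
        return found;
    }

    void locked_tree_insert(struct locked_tree *t, int key)
    {
        pthread_rwlock_wrlock(&t->lock);   /* writers serialize the whole tree */
        struct node **link = &t->root;
        while (*link != NULL)
            link = key < (*link)->key ? &(*link)->left : &(*link)->right;
        struct node *n = malloc(sizeof *n);
        if (n != NULL) {
            n->key = key;
            n->left = n->right = NULL;
            *link = n;
        }
        pthread_rwlock_unlock(&t->lock);
    }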
I'm sure there are better solutions in the locking literature, but I'm up to speed with lock-free rather than locking solutions, so I don't know much about them.
With lock-free there is no single lock, and btrees are beautiful because they naturally distribute memory accesses throughout the tree. Performance basically scales linearly. There's a gnuplot on the site from a 16-core Amazon VM where that property can clearly be seen: every new core simply adds another 200k benchmark ops per second. For the other, non-scaling data structures, new cores are death. A small sketch of the no-lock pattern follows.
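For readers unfamiliar with the lock-free style, here is a minimal, hypothetical sketch of the compare-and-swap retry pattern that replaces a lock. It is shown on a trivial linked stack rather than a btree (the real btree algorithm is far more involved) and is illustration only, not liblfds code.

    /* Hypothetical illustration of the CAS retry loop that replaces a lock.
       No thread ever blocks: a failed CAS just means another thread updated
       the head first, and we retry with the freshly observed value. */
    #include <stdatomic.h>
    #include <stdlib.h>

    struct stack_node {
        int value;
        struct stack_node *next;
    };

    struct lockfree_stack {
        _Atomic(struct stack_node *) head;
    };

    int lockfree_stack_push(struct lockfree_stack *s, int value)
    {
        struct stack_node *n = malloc(sizeof *n);
        if (n == NULL)
            return -1;
        n->value = value;
        n->next = atomic_load(&s->head);
        /* Retry until the CAS succeeds; no lock is ever taken. On failure,
           n->next is automatically updated to the current head. */
        while (!atomic_compare_exchange_weak(&s->head, &n->next, n))
            ;
        return 0;
    }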
For example, here is the link to the rationale for the versioned naming scheme:
http://liblfds.org/mediawiki/index.php?title=Introduction#Co...