Yes. The reason is that every version of the library can be concurrently included and linked against, so that when a new version is incorporated, existing code does not even have to be revalidated, because the code it is using is unchanged.
In this latest release, I've actually moved over to a single repo, which will gain a new directory for each new release. The reason for this is that the benchmark app builds and links against earlier versions of the library, so it can benchmark them. The gnuplots show not only the performance of locking data structures running the same benchmarks, but also of the earlier versions of liblfds.
On Linux and maybe Solaris you can use ELF symbol versioning.
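For example, a sketch assuming a GNU toolchain and a hypothetical libfoo (the names and versions here are illustrative, not anything liblfds actually ships):

    /* foo.c - GNU ELF symbol versioning: two implementations of foo()
       coexist in one shared object; binaries linked against the old
       release keep binding to foo@LIBFOO_1.0, while new links get the
       default foo@@LIBFOO_2.0 */

    int foo_v1( void ) { return 1; }
    int foo_v2( void ) { return 2; }

    /* .symver ties each implementation to a versioned symbol name */
    __asm__( ".symver foo_v1, foo@LIBFOO_1.0" );
    __asm__( ".symver foo_v2, foo@@LIBFOO_2.0" );

    /* build with a version script:
         gcc -shared -fPIC foo.c -o libfoo.so -Wl,--version-script=foo.map
       where foo.map contains:
         LIBFOO_1.0 { global: foo; local: *; };
         LIBFOO_2.0 { global: foo; } LIBFOO_1.0;
    */

Existing binaries then keep running against the exact code they were validated with, while new builds pick up the new version.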
Otherwise, if the concern is libfoo, libbar, and bin/acme all linking to liblfds, then they should link it statically. Either liblfds or the depending code (libfoo, libbar, and bin/acme) can be compiled in such a way that liblfds symbols do not leak outside the component they're used in. On modern systems you're not limited to static or extern linkage scopes, even across compilation units.
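As a sketch of that second approach, again with hypothetical names and assuming a GNU toolchain - libfoo statically links liblfds but exports only its own API, so the liblfds symbols inside it cannot clash with the copies inside libbar or bin/acme:

    /* foo.c - public surface of a hypothetical libfoo which statically
       links liblfds; everything not explicitly exported stays hidden */

    __attribute__(( visibility( "default" ) ))
    int foo_do_work( void )
    {
        /* calls into the statically linked liblfds internally;
           those symbols never escape libfoo.so */
        return 0;
    }

    /* build so that libfoo's remaining symbols, and every symbol pulled
       in from the static archive, are hidden:
         gcc -shared -fPIC -fvisibility=hidden foo.c liblfds.a \
             -o libfoo.so -Wl,--exclude-libs,ALL
    */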
Windows might be difficult. But shared, dynamically linked DLLs aren't common on Windows - at least, not in the sense where the DLL can be sourced independently. That's because historically a DLL was only compatible with code compiled with the same Visual Studio release (the CRT intrinsics and data structures were not backwards or forwards compatible), which meant that in practice you always bundled your dependencies, even if dynamically linked, to ensure everything was compatible.
> On Linux and maybe Solaris you can use ELF symbol versioning.
Remember that the library supports bare metal platforms. It is intended to be used in the absence of an operating system.
I'm afraid I'm not sure I understand what you've written.
My aim is to ensure that when a new release comes out, applications already using the library can adopt it while seeing absolutely no change in the code they are already using (header files, the binary being linked to, etc), as this is the one and only way to know that existing code will be wholly unaffected.
Look at other popular libraries, and consider how they solve this problem - and whether it even is a problem for them (and if not, why).
Usually, this sort of versioning is done if and only if a breaking API change occurs, and such changes are avoided if possible for that very reason. If you only rev the major version number on breaking changes, then you only need to reflect that number in your API (and even then only if/when you actually rev it).
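A minimal sketch of that convention, with hypothetical names - the major version lives in the symbol prefix, so two majors can coexist in one binary without clashing:

    /* version_prefix.c - hypothetical "acme" library, majors 1 and 2 */
    #include <stdio.h>

    /* acme 1.x API */
    int acme1_frobnicate( int x ) { return x + 1; }

    /* acme 2.x API - a breaking change: new signature, new prefix */
    long acme2_frobnicate( long x, int flags ) { return flags ? x * 2 : x; }

    int main( void )
    {
        /* a caller can pin to one major, or use both side by side */
        printf( "%d %ld\n", acme1_frobnicate( 1 ), acme2_frobnicate( 1, 1 ) );
        return 0;
    }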
I am of the view that it is a problem, but one which is almost wholly ignored and, as such, not addressed.
Generally, software projects version as you have described, based on breaking the external API. If the API is unchanged, the library is deemed to be effectively unchanged.
I am not happy with this. ANY change in a library requires revalidation - and so the only way to deal with this is to have no change at all, not even in the library binary. To do otherwise is to assume correctness, which, as we know, given the nature of software, can never be taken for granted.
This level of care about change is, I think, almost never taken, and normally software libraries take for themselves the advantages of fixing versioning at the API-breaking level - it makes development easier, at some cost to the users in reliability (if they accept changes without revalidation) or at some cost to the users in work (if they revalidate).
I think a library should act as a factorization - it should take into itself all possible work which would otherwise have to be performed by ALL its users, so that that multiplicity of work is converted into a single instance of that work, performed by the library authors.
https://github.com/liblfds?tab=repositories