
> I didn't know std::uniform_int_distribution doesn't actually produce the same results on different compilers

I think this is genuinely my biggest complaint about the C++ standard library. There are countless scenarios where you want deterministic random numbers (for testing if nothing else), so std's distributions are unusable. Fortunately you can just plug in Boost's implementation.




It's actually really important that uniform_int_distribution is implementation defined. The 'right' way to do it on one architecture is probably not the right way to do it on a different architecture.

For instance, Apple's new CPUs have very fast division. A convenient and useful way to implement uniform_int_distribution relies on the modulo operation, so the implementation that runs on Apple's new CPUs ought to use the CPU's fast division instructions to compute it.

On other architectures, the ISA might not even have a modulo instruction. In this case, it's very important that you don't try to emulate modulo in software; it's much better to rely on other, more complicated constructs to give a uniform distribution.

C++ is also expected to run on GPUs. NVIDIA's CUDA and AMD's HIP are both implementations of C++. (These implementations are non-compliant given the nature of GPUs, but both they and the C++ standards committee share the goal of narrowing that gap.) In general, std::uniform_int_distribution uses rejection loops to eliminate bias; the 'happy path' has relatively easily predicted branches, but there are cases where the branch is not easily predicted and the loop must, as often as not, iterate again to complete. Doing this on a GPU might be multiple orders of magnitude slower than another method better suited for a GPU.

Overzealously dictating an implementation is why C++ ended up with a relatively bad hash table and very bad regex in the standard. It's a mistake that shouldn't be made again.


But reproducibility is as important as performance for the vast majority of use cases, once these implementation-defined bits start to affect observable outcomes. (That's why we define the required time complexity for many container-related functions but don't specify the exact algorithm; a difference in Big-O time complexity is just large enough to be "observed".)

A common solution is to provide two versions of such features: one less reproducible but maximally performant, and another that takes a middle ground and can be reproduced reasonably efficiently across many common platforms. In fact I believe `std::chrono` was designed that way to sidestep many uncertainties in platform clock implementations.


> Overzealously dictating an implementation is why C++ ended up with a relatively bad hash table and very bad regex in the standard.

What parts of the standard dictate a particular regex implementation? IIRC the performance issues are usually blamed on ABI compatibility constraints rather than the standard making a fast(er) implementation impossible.


Nobody uses the standard library for high-performance random number generation.


> I think this is genuinely my biggest complaint about the C++ standard library

What do you think of Abseil hash tables randomizing themselves (piggybacking on ASLR) on each start of your program?


Their justification is here https://github.com/abseil/abseil-cpp/issues/720

However, I personally disagree with them since I think it's really important to have _some_ basic reproducibility for things like reproducing the results of a randomized test. In that case, I'm going to avoid changing as much as possible anyways.


> There are countless scenarios where you want deterministic random numbers (for testing if nothing else), so std's distributions are unusable. Fortunately you can just plug in Boost's implementation.

I don't understand what your complaint is. If you're already plugging in alternative implementations, what stops you from stubbing out these random number generators with any implementation at all?


It's a compromised and goofy implementation with lots of warts. What's the point in having a /standard/ library then?


> It's a compromised and goofy implementation with lots of warts.

I don't think this case qualifies as an example. I think the only goofy detail in the story is expecting a random number generator to be non-random and deterministic, with the only conceivable use case being poorly designed and implemented test fixtures.

> What's the point in having a /standard/ library then?

The point of standardized components is to provide reusable elements that can be used across all platforms and implementations, thus saving on the development effort of upgrading and porting the code across implementations and even platforms. If you cannot design working software, that's not a problem you can pin on the tools you don't know how to use.


> The point of standardized components is to provide reusable elements that can be used across all platforms and implementations, thus saving on the development effort of upgrading and porting the code across implementations and even platforms.

It's a shame that C++'s "standardized" components ARE COMPLETELY DIFFERENT on different platforms.

Some of the C++ standard requires per-platform implementation work. For example std::thread on Linux and Windows obviously must have a different implementation. However, the vast majority of the standard API is just vanilla C++ code. For example std::vector or std::unordered_map. The fact that the standard defines a spec which is then implemented numerous times is absurd, stupid, and bad. The specs are simultaneously over-constrained and under-constrained. It's a disaster.


I consider the current tradeoff to be a feature.

It permits implementations to take advantage of target-specific affordances (your thread case is an example) as well as taking different implementation strategies (e.g. the small string optimization is different in libc++ and libstdc++). Also you may use another, independent standard library because you prefer its implementation decisions. Meanwhile they remain compatible at the source level.


Unlike in C, in C++ it is not possible to use an independent implementation of the standard library.

Clang is compatible with GCC's standard library/libstdc++ and MSVC's standard library because the clang compiler explicitly supports them, but it's not possible to use clang's standard library with GCC in a standard conforming way or interchange GCC's with MSVC's standard library.

There are some hacks that let you use some parts of libc++ with GCC by using the nostdlib flag, but this disables a lot of C++ functionality such as exception handling, RTTI, and type traits. These features are in turn used by things like std::vector, std::map, etc., so you won't be able to use those classes either, and so on and so forth...


> but it's not possible to use clang's standard library with GCC

Of course it is. Both libc++ and GCC do make the effort to keep that compatibility going.

> in a standard conforming way

What is that supposed to mean here? GCC doesn't let you simply specify -stdlib=libc++ but while it's unfortunate it just means that you have to use -nostdlib++ and add the libc++ include and linking flags manually.

> There are some hacks that let you use some parts of libc++ with GCC by using the nostdlib flag, but this disables a lot of C++ functionality such as exception handling, RTTI, and type traits. These features are in turn used by things like std::vector, std::map, etc., so you won't be able to use those classes either, and so on and so forth...

-nostdlib++ is not a hack but the documented way for using a different standard library implementation. This doesn't prevent you from using exceptions and other runtime functionality.
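For the record, the invocation looks roughly like this. The install paths here are placeholders; substitute wherever libc++ lives on your system:

```shell
# Hypothetical install paths -- substitute your own libc++ location.
# -nostdinc++/-nostdlib++ tell g++ to skip its bundled libstdc++ headers
# and runtime; we then add libc++'s headers and link its libraries by hand.
g++ -nostdinc++ -nostdlib++ \
    -isystem /opt/libcxx/include/c++/v1 \
    main.cpp \
    -L/opt/libcxx/lib -lc++ -lc++abi
```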


As proven by the musl versus glibc issues, that possibility is mostly theoretical, with plenty of gotchas in practice.


Which musl issues are to do with GCC? Alternate C libraries are common on Linux e.g. uclibc, dietlibc, bionic... Not to mention also the other OSs GCC runs on that don't use glibc. Of course, mixing C libraries between the main executable and libraries probably won't work.


There are plenty of people on forums asking why their C code crashes and burns: while those libraries conform to the ISO C standard, their implementation-defined semantics aren't the same.


But the original post meant "independent" as in "compiler-independent" (in contrast to C++), not that there is no difference between the C libraries.


If there is a difference, it is no longer independent.


This makes no sense. As I said, the original poster was saying that g++ and libstdc++ are tangled together and it is not possible to use g++ with another implementation of the C++ libraries. But you can use gcc with other C libraries, as proven by gcc running on other OSs with non-glibc libraries.

If anything isn't "independent" here (using your definition) it's the apps that rely on implementation detail - not the C library, the C programming language, or the compiler.


musl and glibc are both compatible for the POSIX portion of the API they provide. glibc also has lots of GNU-specific functions, but so what?


Square peg into rectangular hole.

ISO C and POSIX have plenty of implementation defined behaviours.


> Unlike in C, in C++ it is not possible to use an independent implementation of the standard library.

It sure is, and is pretty easy to do. I know companies using EASTL, libcu++, and HPX, as well as more who use Folly and Abseil which have alternate implementations to much of the standard library.

These days many languages have a single implementation, so some people whose experience is only in those environments complain that C++ is “unnecessarily complicated to use”. But a lot of that flexibility and back compatibility is what allows these multiple implementations to thrive while representing different points in the design/feature space.


None of those are implementations of the C++ standard library. None of them even live in the same namespace as the standard library so your claim that they remain compatible at the source level is nonsense. Just a simple Google search would reveal that you are wrong about this and what's worse is that you place the burden on me to have to disprove your wrong assertions as opposed to providing references that justify your position:

Folly https://github.com/facebook/folly:

"It complements (as opposed to competing against) offerings such as Boost and of course std. In fact, we embark on defining our own component only when something we need is either not available, or does not meet the needed performance profile."

libcu++: https://nvidia.github.io/cccl/libcudacxx/

"It does not replace the Standard Library provided by your host compiler (aka anything in std::)

Incremental: It does not provide a complete C++ Standard Library implementation."

Abseil: https://github.com/abseil/abseil-cpp

"Abseil is an open-source collection of C++ library code designed to augment the C++ standard library. Abseil is not meant to be a competitor to the standard library"


> Just a simple Google search would reveal that you are wrong about this and what's worse is that you place the burden on me to have to disprove your wrong assertions as opposed to providing references that justify your position:

How rude. Have you used any of those libraries or perhaps relied on Google's "AI"-generated answer?

> None of those are implementations of the C++ standard library.

HPX and EASTL specifically are, the latter being heavily used in (unsurprisingly) the gaming community.

libcu++ is for your target compilation.

As for Folly and Abseil, I wrote "as well as more who use Folly and Abseil which have alternate implementations to much of the standard library" (i.e. not full drop-in replacements).

So really I don't know what your point is: you made an assertion, then replied not to what I actually wrote but by simply doing a quick google search and using that as your conclusion.

I have used all of HPX, Folly and Abseil but I guess the top of a google search result is more authoritative.


EASTL is not a C++ standard library implementation, even if its design does follow it in parts.


My point is simple, you are poorly informed on this topic and should refrain from speaking about it.

None of the libraries you listed are independent implementations of the standard library let alone source compatible.

EASTL does not claim to be an implementation of the C++ standard library, its claim is that it is an alternative to the C++ standard library. Perhaps the distinction is too subtle for you to have actually understood it but one thing is obvious, you have clearly never used it.


I'm pretty sure that gumby forgot more about c++ and c++ implementations than the rest of us will ever know.


> as well as taking different implementation strategies (e.g. the small string optimization is different in libc++ and libstdc++)

As a user this is really not a good thing when the stdlib is tied to the platform. In the end the only sane thing to do if you want any reproducibility across operating systems is to exclusively use libc++ everywhere.


>In the end the only sane thing to do if you want any reproducibility across operating systems is to exclusively use ...

no std library code in portable code. FTFY.

Ofc there are no absolutes. In gamedev, essential type-info bits and intrinsics are either allowed back or wrapped. The algorithm library is another bit allowed for the most part (no unstable sort and such).

I know your approach of 'one ring-libc++ to rule them all' is much more popular in the community, but gamedevs have needed cross-platform code for a long time. It has always worked well, regardless of the opinions.


> no std library code in portable code. FTFY.

Just want to clarify what I think you meant: no std library code in portable binaries, something I agree with 100%.

If you distribute in source, I believe almost always the opposite is true: relying on the standard library is probably a win. Not every time: some complex code bases have nonstandard requirements or can benefit from nonstandard code. Gaming is a good example of this.


I am not sure I understand. You declare a win with no explanations or reasons.

What is so beneficial about having different implementations of the same functionality? Why does source being available make a difference?

We can ignore cases of optimized per-platform implementations because std was not made for that. None of the platforms that make sense to support now were available back when the current ABI was set in stone.


> Why does source being available make a difference?

Because the library is a spec, they don't have to be compatible at a binary level. This is obvious on a hardware architecture level (a 16-bit processor vs a 64-bit processor) but it's true even on the same hardware under the same OS (the different representations of std::string being a famous example). But they are all compatible at the API level -- which in C++ is the source level.

So this is why I mention compatibility at the source code level.

> What is so beneficial about having different implementations of the same functionality? We can ignore cases of optimized per-platform implementations because std was not made for that.

The point of having different implementations is that not every program has the same needs. The idea of a standard library is that it has a bunch of commonly-used functionality that you can just use so you can concentrate on your own program. You don't have to roll your own string class -- just use the standard one. You may later find you have special needs and need to roll your own, more restrictive string that does just what you want, but in most cases people won't have to. (You can argue whether the C++ standard library accomplishes this or not, but that's a separate matter).

A good example of this is std::map, which was overspecified and is thus almost never what you want. If the specification had been looser, then different implementations could have chosen different solutions, and even borrowed from each other.

And per-platform optimization is exactly part of std's requirements. Different implementations for the 16-bit and 64-bit cases are an easy example. The compilers emit a lot of intrinsics to take advantage of CPU capabilities (a common example is memcpy, but there are many). Just try to read the source code for libc++ -- it's hard to read when you aren't familiar with it because it's (1) full of corner cases so that it works with any code that uses it, but also (2) full of target-specific optimizations and special cases.

> None of platforms that make sense to support now were available back when current ABI was set in stone.

Well this is true, but since it's a spec it remains source-code compatible.

I don't know if your use of "ABI" was a typo for "API" or if you really meant "ABI". The APIs are set in stone because of back compatibility (like the notorious std::map example I call out above) but use of ABI, when used by the committee, refers to the de facto binary layout of code that's already been compiled. There are few platform ABI specifications for C++; they are mainly for C, with sometimes some Ada or FORTRAN calling convention stuff. Rarely do platforms say anything about C++, and when they do they don't specify much. There is also a little ABI requirement in C++ (e.g. address of a derived class must also be the address of its own most fundamental base class) but that's for something that is reflected at the source code level.


>> What is so beneficial in having different implementation of same functionality?

> The point of having different implementations is that not every program has the same needs.

Are we still chatting about porting exactly the same game product to multiple platforms? Portable code means it performs exactly the same function in an app.

It is clear that we are talking past each other. I will leave you to it.


Hard, hard disagree.

If you want to support different implementation strategies it needs to be far more piecemeal. Not all or nothing. I mean there are only 3 meaningful implementations - libstdc++, libc++, and MSVC. And they aren't wholly interchangeable!

Quite frankly if you value trying different implementation strategies then the C++ model is a complete and total failure. A successful model would have many, many different implementations of different components. The fact there are just 3 is an objective failure.


See my parallel reply: there are much more than just three, and all work with the three/four most dominant compilers these days as well as less dominant ones like EDG or Intel.


No, there are just 3 relevant standard implementations. There are numerous independent libraries that perform very similar but non-conformant functionality.

My complaint is that the C++ standards committee should, when possible, release code and not a specification. They shouldn't release a std::map spec that 3 different vendors implement. The committee should write and release a single std::map implementation. It's just vanilla C++ after all.

My proposal does not prohibit Abseil, Folly, etc from releasing their own version of map which may, and likely will, choose different constraints and trade-offs.

Rust's standard library is not a spec, it's just code. There are many, many, many crates that implement the same APIs with different implementations and behavior. Sometimes those crates even get promoted and become the standard implementation. This is, imho, a far superior approach to the C++ specification approach.


> Rust's standard library is not a spec, it's just code.

I consider this a profound weakness, not a strength.

Don’t get me wrong: I recognize the benefits in the short term! But really long lived languages like FORTRAN, Lisp, C++ have benefited hugely from a spec-based standards approach adopted from other engineering practice. They have also benefited from cross-fertilization from different implementations which influenced later standard and thus each other.

This is why standards from building codes to electrical systems, to ships, manufacturing QC sampling, TCP/IP (and all the internet RFCs) and basically the entire corpus of ISO standards are spec based.

If you want to build long-lived engineered systems it’s worth learning from people who figured out a lot of the metaprocesses the hard way, some of them more than a century ago.


We may just have to disagree on this one.

The fact that some things benefit from a spec does not mean that all things do. Almost everything defined by the C++ committee since 2014 is awful. The specs, once published, are unable to evolve due to ABI.

The Rust standard library is soooooooo much better than C++’s. By leaps and bounds. And it continues to improve with time. C++ is far worse and far more stagnant. That’s lose/lose!

I don’t see how you could possibly claim that std::map and std::deque being a spec is a profound strength.

The fact that you celebrate non-spec implementations such as Abseil and Folly seem to me to be evidence supporting implementations over specs!

To be clear I’m talking about the standard library, not the core language syntax.


Computer programs that other computer programs use all require a detailed functional spec.

Entertainment software, not so much.

In between, you need varying amounts of spec.


> Computer programs that other computer programs use all require a detailed functional spec.

And yet most programs that are used by other programs do not provide a detailed functional spec! How curious.

Most computer programs do not have a formal, detailed, functional spec. They simply are what they are. Furthermore, the type of specs we are talking about are incomplete and purposefully leave a lot of room open for implementers to make different choices. Those choices are unspecified but fully relied upon.

Hyrum's Law: With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody.

std::deque has an underdefined spec such that the MSVC implementation meets the spec but is utterly worthless. And it can't be fixed because that would break the ABI.

In this thread I'm specifically talking about the C++ standard library specification and implementations. Whether other software benefits from a detailed spec or not is outside the scope of this conversation. I maintain that the C++ standards committee should provide a std::deque implementation and not a spec. Thus far no one has even attempted to argue why it's better as a spec. Womp womp.


C++'s approach also doesn't stop alternative approaches implemented in third-party libraries and sometimes those also do get added to the C++ standard library spec (many standard APIs are very close to pre-existing Boost APIs, for better or worse).

Since most standard library implementations are open source you CAN also pick components individually even if it takes a bit more effort to get all the required support cruft and avoid namespace clashes.


std::deque, a container with some quite useful theoretical properties, is completely unusable because the node size is not specifiable by the user and MSVC chose 16 bytes (I think, insanely small nonetheless).


> with the only conceivable use case being poorly designed and implemented test fixtures.

Reproducible pseudo-randomness is a necessity with fuzz testing. It is not a poor design approach when it is actually useful.


it is reproducible within a single standard library implementation, so usable for fuzz testing



