Why undefined behavior may call a never-called function (kristerw.blogspot.com)
139 points by pavel_lishin on Sept 24, 2017 | 169 comments



I have become convinced that the current screw-the-programmer interpretation of ‘undefined behaviour’ was not intended, or even imagined, by the original ANSI C committee. Within the committee's mandate to ‘standardize existing practice’, it was simply an acknowledgement that C compilers translated straightforward C code into straightforward machine code without adding safety checks, and that simple code might — in some circumstances on some machines — do very strange things.

In the Rationale, the committee listed places where they intentionally diverged from existing practice. They considered “the most serious semantic change” to be requiring value-preserving (vs unsigned-preserving) integer promotion. They didn't mention ‘undefined behaviour’ at all.

During the standards process, Dennis Ritchie described the ‘noalias’ proposal as “a license for the compiler to undertake aggressive optimizations that are completely legal by the committee's rules, but make hash of apparently safe programs”. That's exactly what ‘undefined behavior’ has turned into. If anyone had foreseen that at the time, the reaction would have been the same: “Noalias must go. This is non-negotiable.”


> the current screw-the-programmer interpretation of ‘undefined behaviour’

I don't believe that interpretation was ever followed by anyone. In C parlance, "undefined behavior" only means that the standard does not define any specific behavior for a particular case. In other words, that particular code construct isn't valid C, but the standard doesn't mandate it should be illegal because some C implementation somewhere may have a very good reason to fill in the blank.

Therefore, anyone writing code in standard C should simply not fumble around by writing code whose behavior he does not know, and those who target a specific compiler implementation are required to know what the compiler actually does in that specific case (i.e., how it has defined the behavior left undefined by the standard), and thus know very well what they are doing.

In short, "undefined behavior" never meant "screw-the-programmer". At most, clueless programmers screw themselves due to ignorance.


Your argument is tautologous. Any "clueless programmer" can "screw themselves due to ignorance" in any programming language. The question is how difficult it is for programmers to avoid mistakes and how aggressive the programming language is in punishing programmers for their mistakes. C is terrible by these measures.


> Your argument is tautologous. Any "clueless programmer" can "screw themselves due to ignorance" in any programming language.

It appears you're missing the fact that someone needs to be completely oblivious and very foolish to expect anything out of behavior which was intentionally left undefined. I mean, "undefined behavior" clearly signals that no particular behavior should be expected. A programmer needs to be particularly clueless and especially incompetent to write code that has unexpected consequences and goes against the warnings spelled out not only in the international standard which specifies the language in detail but also in long-standing programming references.

A programming language is not "aggressive" just because incompetent programmers decide to go against the very specification of the programming language to write broken code. Your argument is like claiming electricity is "aggressive" just because some idiot repeatedly sticks his fingers in an electrical outlet and in the process zaps himself.


Electricity _is_ aggressive in that it does not weigh the interests of the person and just does its thing, just like C.

You can now insult and smear people who write undefined code in C, but that does not change the fact that undefined behaviour happens in C code and is a source of problems. These problems are worth a solution beyond swearing and derision.


> A programmer needs to be particularly clueless and specially incompetent to write code

Or just distracted. Even the best programmers make mistakes. Which is why programming language designers should attempt to make it easy to do the correct thing, and hard to do the wrong thing.


> Or just distracted.

Could you provide a single example showing how distraction alone can lead a competent programmer to write C code that relies on undefined behavior?


> A programmer needs to be particularly clueless and specially incompetent to write code that has unexpected consequences

No. That's called "a bug".


> No. That's called "a bug".

You need to actively go against the most basic aspects of the programming language you're using to force undefined behavior into your code. So, it's not merely a bug. It's the direct result of incompetence, and one which no programmer can pin on his tools or even the programming language.


> You need to actively go against the most basic aspects of the programming language you're using to force undefined behavior into your code.

Adding two ints together is potentially undefined behaviour.
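
A minimal sketch of why (function name invented; INT_MAX comes from limits.h):

  #include <limits.h>

  int add(int a, int b) {
    return a + b;  /* undefined behavior if the mathematical sum
                      doesn't fit in an int */
  }

  /* add(INT_MAX, 1) is undefined: the standard guarantees neither
     wraparound, nor a trap, nor any particular result. */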


AFAICT, the main rationale for undefined behavior in the original instance boils down to traps: any operation that could trap is undefined. In particular, this seems to motivate why signed integer overflow is undefined as opposed to implementation-defined. It should be pretty clear why it's a good thing that potentially-trapping instructions have undefined semantics in those instances (or, perhaps more accurately, have semantics that have large leeway for when, where, and if the trap is triggered).

Undefined behavior is generally nowhere near as bad in practice as it's often made out to be. It's mostly a case of taking code that is manifestly wrong and making it manifest wrongly in different ways, and when you look at how and why compilers use the undefined behavior to optimize, it's hard to actually object to it. The two main counterexamples are strict aliasing (although it should be noted that most compilers will use the strict aliasing rules only if they failed to figure out aliasing by other means, so trivial things like int_var = *(int *)&float_var don't end up being deleted as nonsensical) and signed integer overflow (note that wraparound semantics are usually just as bad as undefined behavior, but making it undefined makes it challenging to check whether the operation would overflow).
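
For the strict-aliasing case, a hedged sketch of the "trivial thing" being described (assumes int and float have the same size):

  #include <stdio.h>

  int main(void) {
    float f = 1.0f;
    int int_var = *(int *)&f;  /* reads a float object through an int
                                  lvalue: formally a strict-aliasing
                                  violation, but compilers generally
                                  translate this simple case as written */
    printf("%d\n", int_var);   /* typically prints the bit pattern of 1.0f */
    return 0;
  }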


Not just traps... systems-level code often relies on what is by the standard "undefined behavior" but is in fact well defined for that implementation, something the standard allows for very good reason. At the end of the day, interacting with hardware sometimes requires doing things that "No Sane Programmer" would ever do normally. Most undefined behavior is a nice way of saying that the standard is not going to try to define something that is probably an artifact of the implementation. Compiler writers can, however, use and abuse the fact that there are no constraints. MSVC will prune code with undefined behavior. Clang may call the never-called (only if not static or in an anonymous namespace), etc. Undefined behavior is only undefined by the standard... in most cases it has very well-defined results... for a specific implementation and hardware set.


The original ANSI C committee had no idea about modern optimization pipelines. If people had continually pushed back against undefined behavior back then, there's a good chance that by 2017 the result would have been that C would be dead, replaced by a language that allows for modern optimization techniques.


As others have said, if C were dead by now that would be great.

But I'm pretty skeptical, to be honest. I've been working on one or another high-performance C or C++ program for most of the past 20 years. I can't ever remember getting a really substantial speed improvement from upgrading the compiler for an important platform, because anything the old compiler did badly that really hurt program performance had already been avoided, either before or after seeing it in profiling results. I'm sure that if you took C code developed on a modern compiler and compiled it with a 90s compiler, it would be slower. But I doubt the software ecosystem would actually be drastically slower if optimizer technology hadn't advanced significantly since 1995, and everything had been developed under that constraint. And I don't think every single advance in optimization since 1995 is dependent on degenerate transformations of undefined behavior.


You're the last person I'd have expected to make that sound like a bad thing.

While benchmark games played a part in the modern ‘undefined behavior’, I'm not so sure that it would have made much difference to adoption of the language. Consider Linus Torvalds's well-known rant as a point in the opposite direction.

In the universe where ‘undefined behaviour’ had been clearly specified as ‘what you write is what you get’, C might have gone to consistent use of optimization-enabling annotations, following ‘const’ (and later ‘restrict’). Along those lines, ‘volatile’ was ANSI's other big mistake, as creating it broke existing code; I now think that should have been the default, with fragile-optimizable status being explicitly indicated by extending the use of the ‘register’ keyword.


It's not just benchmark games. It's people's real-world code.

Look at how often folks on HN complain that the Web platform is useless because JS is slow. C without optimizations is just as slow if not slower. Compiler optimizations are so good that people don't realize how much they rely on them.

Linus was wrong when he complained about the compiler treating null dereference as undefined behavior. This is a very important optimization, because it allows the compiler to omit a lot of useless code in libraries. Often times, the easiest way to prove that "obviously" dead code can never be executed is to observe that the only way it could be executed is for null to be dereferenced.
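
A sketch of the kind of dead-code proof being described (all names invented for illustration):

  struct node { int value; struct node *next; };

  /* a "library" routine with a defensive check */
  static int list_length(struct node *n) {
    if (n == NULL) return 0;
    int len = 0;
    for (; n != NULL; n = n->next) len++;
    return len;
  }

  int value_plus_length(struct node *n) {
    int v = n->value;           /* dereference: the compiler may assume
                                   n != NULL from here on */
    return v + list_length(n);  /* after inlining, the n == NULL branch
                                   is provably dead and can be removed */
  }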

Opt-in optimization keywords wouldn't scale to the sheer number of optimizations a modern compiler framework performs. Restrict hasn't been a success because it's too subtle of an invariant. It's the kind of thing compiler authors, not application developers, should be thinking about.


This is an important point, and it's why the "C should die" crowd is hard to take seriously. They've even started labeling people who use C as somehow morally suspect, as if we're bad people for choosing to use an unsafe language. We're knowingly putting people in danger! Right.

It's strange that the word "unsafe" has tainted people's thoughts so dramatically. Like calling torrenting music "piracy."


I'm not endorsing C. Don't use C for anything you need to be secure.


I need Emacs to be secure. It's written in C. It interfaces with the internet.

Ditto for Bitcoin. It's the basis of a new financial system. The core software is written in C++.

Same for Linux. C.

Prejudice generally isn't helpful, and it's a bit strange that you can recognize C's merits while also decrying it.


I haven't been "recognizing C's merits" either. In fact, the real reason behind this problem is that C is not type safe, so optimizations (such as the one in this very article!) that are perfectly fine to do in type-safe languages are not possible to do in C without aggressively "exploiting" undefined behavior.


If one doesn't follow the High Integrity, CERT, or MISRA standards, validated with tools like LDRA.

Or, at the very minimum, compiling with warnings enabled as errors, with a continuous build breaking on static analyser errors, then yes.


> Look at how often folks on HN complain that the Web platform is useless because JS is slow.

I haven't actually seen anyone complaining about that. Do you have any links?

There are some specific complaints like: JS can't do 64-bit arithmetic or SIMD; but that's only really needed for games and scientific computing, which don't need to use JS. Or that JS is single-threaded; that's a fundamental feature of its design, nothing to do with optimisation.

> C without optimisations is just as slow if not slower.

Nobody's talking about taking away all optimisations, just not trying to do extreme optimisations that exploit undefined behavior (or rather, assume it can never occur).

Plenty of C compilers worked that way in the 90s and performance was perfectly acceptable (on hardware with a fraction of the speed and memory of today's computers and phones).

Modern C++ probably relies on a higher level of optimisation, but that's another story.


> Nobody's talking about taking away all optimisations, just not trying to do extreme optimisations that exploit undefined behavior (or rather, assume it can never occur).

Those "extreme" optimizations are usually just surprising behavior that emerges from perfectly reasonable optimizations. For example, assuming that code that follows null dereference is dead is important.

> Plenty of C compilers worked that way in the 90s and performance was perfectly acceptable (on hardware with a fraction of the speed and memory of today's computers and phones).

You can get that experience with GCC -O0. Do you want to run code like that? I don't, and neither do customers.

People who don't work on modern compilers often think that there is a subset of "simple" optimizations that catches "obvious" things, and optimizations beyond that are weird esoteric things nobody cares about. That isn't how it works. Tons of "obvious" optimizations require assumptions about undefined behavior.


If I could get that on -O1 or -O2, or maybe -Os, I would very happily do so. (But I don't actually know what optimisations those entail without poring over the manuals.)

You're implying that 90s compilers had no optimisations, which is incorrect.

Why is this a hot topic now, and why was it not a hot topic 10 or 15 years ago?

I suggest that something changed in the interim, and that what changed is the addition of dangerous optimisations. I'm not sure where it all started but strict aliasing in GCC is a potential candidate.

As others have pointed out, GCC and Clang seem to have by far the most horror stories, even though they don't actually generate the fastest code. I imagine that's mostly because GCC and Clang are so widely used, though.


I don't see how it's an optimization to assume, at compile time, that a static pointer that is always null is actually aiming at the function NeverCalled. Why not pick some other function, like one which prints a diagnostic and calls abort?


It's not aiming at the function NeverCalled; it's aiming at EraseAll.

As the article states, if a local static function pointer has exactly one assignment to it, then it can be an important optimization to assume that it will always have that value. Imagine that it's some kind of "DebugPrintf(...)" function pointer that, in release builds, is always set to a no-op that does nothing before being called. You would definitely want that indirect function call to be inlined to nothing in release.
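
A sketch of that release-build scenario (names invented):

  typedef void (*LogFn)(const char *);

  static void NoOpLog(const char *msg) { (void)msg; }

  /* in release builds this pointer is only ever assigned NoOpLog */
  static LogFn DebugLog = NoOpLog;

  void Work(void) {
    DebugLog("entering Work");  /* one known target: the compiler can
                                   devirtualize the call, inline NoOpLog,
                                   and make this line vanish entirely */
  }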


For debug functions that completely disappear in release builds, we have inline functions with conditionally empty bodies or old-school macros.

It is a (decades ago) Solved Problem.


But sometimes these checks just seem to end up entirely removed, and that is just not OK: I have been a developer working on performance-constrained system software in low-level programming languages (including heavily optimized games written in C++), and this undefined behavior idea has gone way, way too far. I can always make code faster by removing checks I don't need manually: trying to compare the small gains here with "let's just use node.js lol" is dishonest.

C++ states things like: references must not be NULL, and "this" must not be NULL. But in the real world it is possible for a NULL pointer to be dereferenced into a reference, for a method to be called on that reference, and for the method to complete execution without the app crashing. Yet some C++ compilers now insist that "this == NULL" checks (the most hilarious case, but simple "&ref == NULL" checks are the same) and all the dependent code be entirely removed, hamstringing runtime safety and sanity checks.

What works for me is when the compiler says "for this to happen the code would have had to crash"; what does not work for me is "for this to happen the code would have to be violating the specification", as the entire point of NULL checks in a program was always to check for invalid execution and mitigate its effects :/ And yet, since this code has never crashed on any reasonable C++ compiler, the only way to check for it is to add comparisons that are now being removed under the misguided assumption that the code would have failed at runtime.
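
A sketch of the sanity-check pattern being discussed (C++, names invented):

  struct Widget {
    int size = 0;
    int Size() const {
      if (this == nullptr) return 0;  // runtime sanity check; because a
                                      // null this is UB per the standard,
                                      // compilers may delete this branch
      return size;
    }
  };

  int SafeSize(Widget *w) {
    return w->Size();  // with w == nullptr this is formally UB, yet on
                       // typical ABIs the call itself "works", and only
                       // the (now removed) check would have caught it
  }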


These optimization opportunities aren't small gains. They have big consequences, for example when they cause code that would not be vectorized to be vectorized. Again, compiler authors don't add UB optimization for the fun of it. Patches to add theoretical optimizations that don't actually move the needle are routinely rejected from LLVM and GCC (as they should, because optimizations slow down compilation, so they need to pull their weight). Rather, they add UB optimization when code is shown to benefit, often the code that people come to their bug trackers with complaining that it doesn't optimize to what they expect.


If there were a simple and reliable way to say "make the program crash if we hit any of this UB, rather than optimising it completely away" I think that would make a lot of people happy.


LLVM already does this as much as possible. Look at how Clang inserts "ud2" instructions in dead code paths.


Is there a way to use that to address the "NeverCalled" example?

I feel like there's a huge disconnect here. Even after the strange behavior is explained, some people say "wow, I never ever want that behavior, how do I reliably avoid it?" but others respond "there's no problem, you're just using it wrong".

Is there really no way to satisfy both sides?


The proper place to put those checks is before the undefined behavior would be invoked, not after.


What language do you think would have replaced it?

It seems to me that C had already won as the de facto systems language, long before any of these "modern optimisation techniques" cropped up.

Optimisations that make it harder to use the language safely are downright dangerous in my book.


I would have been happy with Modula-2, Ada or Object Pascal as basis.

But better yet would have been Modula-3 or Active Oberon.


I hear very good things about Ada, aside from its unfriendly old-fashioned syntax.


Some of us do enjoy verbose explicit syntax instead of hieroglyphs. :)

Quite helpful when maintaining unknown code in big corp projects.


It would have been much more prudent if the committee defined the behavior in a way amenable to optimization, rather than asking for a blank check.


They did. "Undefined" doesn't mean "unconsidered by the committee", it means "do what you must for optimization".


And would that be such a bad thing?

How come we have languages like Rust that achieve almost the same speed while maintaining dramatically better safety, anyway?


Because type safety makes a lot of optimizations sound. C and C++ have to use undefined behavior rules to achieve a lot of optimizations that type safe languages can more easily perform. In fact, the optimization that the article is complaining about is really only a problem because C is not type safe.


So that's my point. If C was those few percentage points slower, and died, then we'd have better languages sooner.


Sounds like win-win to me!

C would be dead.

We will have a sound low-level language suitable for optimization.

Now, if we throw in C++ eradication...


> I have become convinced that the current screw-the-programmer interpretation of ‘undefined behaviour’ was not intended, or even imagined, by the original ANSI C committee

Does it matter today what the original committee did or did not intend? I might even invoke the old adage "the road to hell is paved with good intentions"; the best would be to learn the lessons from their failures when designing new languages.


This, just so much this!

I've been longing for a C compiler with a "sane optimizations only" switch like forever. I'd gladly give up the additional couple of per cent speed improvement obsessive-compulsive compiler writers managed to eke out by ignoring the source code's obvious intentions and defending it with "but technically it's undefined behaviour"!


Everyone posts a comment like this every time undefined behavior surprises someone. But the reality is that the reason why C remains alive is that compiler writers have managed to make it fast. It is not "a couple of percent": these kinds of optimizations can make an enormous difference when, for example, they're the difference between vectorizing a loop and not doing that.
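
A hedged sketch of the vectorization point (function and names invented): with a signed 32-bit index on a 64-bit target, overflow of i would be UB, so the compiler is free to widen i to 64 bits and vectorize without guarding against wraparound.

  void scale(float *a, int n, int stride) {
    for (int i = 0; i < n; i += stride)  /* signed i: assumed never to wrap */
      a[i] *= 2.0f;
  }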

Compiler writers are not "obsessive compulsive". They respond to the demands of their customers. Frequently, the reason why these optimizations exist is that someone filed a bug asking "why doesn't the compiler do this seemingly-obvious optimization?" Often, the only reason these seemingly-obvious optimizations work at all is by exploiting undefined behavior.


> Everyone posts a comment like this every time undefined behavior surprises someone.

Yes! Because it's terrifying, and the explanation is not reassuring.


The craziness for C optimizer tricks is in many cases purely psychological.

As an example, it doesn't matter that it does in 5ms what it would take 15ms to do without the optimization, if in the end anything under 20ms has no visible outcome.

Actually, there is a remark from a famous compiler writer (female), whose name I don't recall now, that C brought back to the stone age what optimizers were already able to do for the Algol family of languages.

This is visible in the PL/8 research paper, a compiler that already used optimization passes in the 70's of the kind nowadays common in LLVM.


> As an example, it doesn't matter that it does in 5ms what it would take 15ms to do without the optimization, if in the end anything under 20ms has no visible outcome.

That hasn't been true ever since power consumption started mattering.


If it really mattered at that level, people would have already stopped trying to do Web-based OSes or applications for mobile phones.

Also Apple, Google and Microsoft would expose the complete set of mobile and watch OS APIs to C and C++ developers, which they don't. They are kept down to the bare minimum for the upper OS layers.


If you mean it takes 15ms vs 5ms to compile, yes. If you're talking execution speed, however, those low-hanging-fruit optimizations were picked in the 1980's. What we're talking about here are optimizations that speed up some obscure benchmark on a disused architecture by 5%. Bonus: no telling if performance will be worse on next year's silicon.


Could it be Fran Allen, the one with the Turing award? But I can't find the comment with google, so maybe not.


Yes it was her.

The statement is actually part of her "Coders at Work" interview, where she explains, from her point of view, how C "has grievously wounded the study of computer science."

http://www.codersatwork.com/fran-allen.html


Can you give some examples of optimization passes in clang/LLVM that you think are unnecessary?


I didn't say they aren't necessary, rather that the idea was already used in PL/8 during the 70's, a type-safe systems programming language.

https://courses.physics.illinois.edu/cs426/fa2017/Papers/pl8...

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.87.4...


So now the compiler should also statically figure out how long some code is going to take to run?

Chances are I want the compiler that gives me a binary that does its job in 2 hours rather than 6.


There's some discussion about that here https://blog.regehr.org/archives/1287


How do you know it would only be a few percent? What if such a switch made your code ten times slower? Without actually reading the clang code and seeing what all the optimization passes do, I don't see any a priori reason for assuming it would be a few percent rather than 10x.


What would that look like? How would this prevent me from returning a pointer to a stack frame that no longer exists, just for example? Clearly undefined behavior, but C doesn't seem capable of expressing this safely, with or without the compiler's help?


It wouldn't make it safe; it would make it so that the consequences of returning that pointer are just the same as if you had written an assembler routine returning that pointer. On all common hardware, that would be none if the pointer is never used; on most common hardware, it would be none if the pointer is never used in a call at least equally deep — but anyway, you'd get whatever the machine does, and if you don't like it, that's your problem. In pre-UB C, the ‘spirit of C’ that C89 was intended to maintain was that you get what you wrote.

By contrast, modern ‘undefined behaviour’ means that if you return that pointer — even if it is never actually used — the compiler can throw away all the code leading up to that point (including external side effects) on the grounds that it “can't happen”. You get much less than what you wrote.


Are you sure about that? Can you point to a clause in the standard?

What may certainly cause the behavior you describe, though, is creating a pointer to an out-of-bounds element of an array.


> How would this prevent me from returning a pointer to a stack frame that no longer exists, just for example?

It doesn't, but the outcome would be predictable: the pointer will always be pointing there.

I assume you mean by "stack frame that no longer exists" something like returning a pointer to a local variable; what that would do is return the address where the variable was --- the memory address still exists, so "no longer exists" is somewhat of a misconception here.
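
A minimal sketch of the construct in question (name invented):

  int *escape(void) {
    int local = 42;
    return &local;  /* the address of a stack slot that is dead after the
                       return; ISO C makes later use of it undefined, but
                       "you get what you wrote" semantics would simply
                       hand back the stale address */
  }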

What "sane optimisations" mean is that the compiler won't e.g. decide to remove all the code following any call to that function, just because you "invoked UB".


Is that really such an improvement?


Maybe you won't quickly find a practical and non-contrived application for this specific case, but there are plenty of others. UB breaking buffer overflow checks springs to mind.
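
The classic instance, sketched (hypothetical code): pointer arithmetic past the end of an object is UB, so the compiler may assume buf + len cannot wrap and silently fold the guard to false.

  int would_wrap(const char *buf, unsigned long len) {
    return buf + len < buf;  /* may be optimized to: return 0; */
  }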


This is literally a religious argument. No sane person would consider this acceptable behavior if not for the fact that there is a holy text ("the standard") that says it's acceptable. Well, it's not acceptable. It is no more acceptable than, say, a car that explodes if you push the wrong button at the wrong time, which would be clearly unacceptable even if there were a document blessed by a standards committee that said otherwise. Faulty code can literally make things blow up in today's world, so there is literally (and I really do mean literally) no difference between these two scenarios. It is truly a sad reflection on the state of our profession that we are even spending time arguing about these things instead of fixing the standard so that the language it defines is actually useful for writing programs rather than just a source of material for games of intellectual one-upsmanship, to say nothing of myriad real-world problems. You'd think that decades of security breaches caused by buffer overflows would make people think, "You know, it's 2017. Maybe array dereferencing without bounds checks is a bad idea even if it does let my code run a little faster." Alas.


> You'd think that decades of security breaches caused by buffer overflows would make people think, "You know, it's 2017. Maybe array dereferencing without bounds checks is a bad idea even if it does let my code run a little faster." Alas.

And there are dozens of languages that will let you sacrifice that bit of speed for some safety.


If you meant to imply that this is not a problem because there are other languages one can use, I disagree. C holds a unique position in the computing world. There is an enormous corpus of C source code out there, and more is being written all the time, notwithstanding that C as currently specified is not a sane language. So what C compilers do matters whether you like it or not.


And why do you think that is the case?


Inertia. It's very hard to replace infrastructure.


The problem here is that the behavior is in fact defined. It is not defined by ISO C, so it is "(ISO C) undefined behavior". But requirements for C program behavior do not only come from ISO C. It takes more requirements than just those from ISO C to make a complete implementation.

On "modern", Unix-like, virtual memory platforms, we have an understanding that the null pointer corresponds to an unmapped page, and that this is the mechanism for trapping null pointer dereferences (at least ones that don't index out of that page).

A compiler which interferes with this by replacing valid null pointers with garbage at the point where they are dereferenced is running afoul of this platform requirement.

Look, the generated code is not even pretending to use the value of the null variable. We cannot reasonably call this a translation of the program's intent into machine code.


If you make one small change to this file, you can get both clang and gcc to catch this at compile time if you are using warnings:

  namespace {
    void NeverCalled() { Do = EraseAll; }
  }
or marking NeverCalled as static itself.

Either change results in: warning: unused function 'NeverCalled'.

In general this is best practice for functions defined and used in a single translation unit.


That's a vital part of the setup.

You could hypothetically link this compilation unit against another unit which included:

  void NeverCalled();
  struct A {
    A() { NeverCalled(); }
  };
  A a;


You don't even need C++, you can just use C and __attribute__((constructor)).
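
Something along these lines (a GCC/Clang extension; the init function name is invented):

  void NeverCalled(void);

  __attribute__((constructor))
  static void Init(void) {
    NeverCalled();  /* runs before main() via the platform's init array */
  }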


Which is a GCC specific extension and not ANSI C.


Interesting choice of code for a demo. I wonder if anyone hosed themselves running this.

It seems GNU rm has "--preserve-root" as a default, but that's not guaranteed to be on every rm.


> but that's not guaranteed to be on every rm.

Not even the GNU Coreutils one, if compiled with this compiler.


If it's run as an unprivileged user rather than root, it won't be able to delete the whole system. I'm pretty sure that means it won't do anything, but I don't feel like trying it.


> I'm pretty sure that means it won't do anything

Assuming an rm without the "--preserve-root" default, it would remove everything that it had permission to. So, eventually, for example, it would wipe your home directory.

I suspect this to be the case for OSX, Alpine Linux (or other distros that use busybox), probably some of the BSD distributions, etc.


  $ uname -sr
  FreeBSD 11.1-RELEASE

  $ rm -rf /
  rm: "/" may not be removed
No idea about other BSDs.

Busybox's rm is indeed happy to nuke /.


Also:

  $ uname -sr
  OpenBSD 6.1
  
  $ rm -rf /
  rm: "/" may not be removed


The latest POSIX standard actually requires this behavior:

> If […] an operand resolves to the root directory, rm shall write a diagnostic message to standard error and do nothing more with such operands.

Source: http://pubs.opengroup.org/onlinepubs/9699919799/utilities/rm...


One of the advantages of using Docker! You can safely run this code multiple times and play with it without affecting your machine.


Sure, but you could also replace "rm -rf /" with "id" or something else non destructive. Somebody is going to copy paste that snippet and have a bad day.


If someone blindly copies, pastes, compiles, and runs a random snippet of code from the internet that is specifically described as producing weird and unexpected behavior, then they deserve the harsh lesson they're about to learn.


Just because it's a harsh, cruel world doesn't mean you should be cruel.


I normally have some measure of sympathy for people who blindly run code that is unexplained or under-explained (e.g. the many "curl | bash" installers). But blindly running code that is specifically called out as having weird and unexpected behavior is like driving your car right through the bright yellow "bridge out" sign that's blocking the bridge.


It doesn't mean he should be cruel, it just means he is statistically more likely to be cruel.


I usually use "cowsay" for demonstrating that arbitrary code execution is possible. Less destructive, and more entertaining when someone actually tries to run the code. :-)


I believe "rm -rf /" illustrates the point quite well, better than id. I'm more of the opinion that common sense is more common than people give it credit for, and in the situations where that is not the case, rm has built-in protections.

It literally says:

> That is, the compiled program executes “rm -rf /”

That's the line immediately following the code. It's not tricking anyone.


GNU rm has built-in protections. Not sure that's the case on OSX, Linux distributions that use busybox, etc.


Do you use Docker to run literally every piece of code though?


Thanks to whoever down-voted, but yes, I do.


From the HN guidelines:

> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.


I wrote in C for 20 years and now code in Swift (which I love). Reading this reminds me that Chris Lattner created Swift because he wanted a language to do more aggressive compiler optimizations than he could do for C.


rm -rf /

Using this type of code to show unintended consequences is itself a hotbed of unintended consequences!

Just put in code that prints "pwned" instead of running code that would delete someone's system or home folder.


Do any up-to-date operating systems actually run this without the --no-preserve-root flag?


btw, this would work with slightly older rm from GNU coreutils:

  rm -rf --n /
any GNU long option (getopt_long) for any command can be shortened as long as there isn't any ambiguity, hence the above (this is fixed in the latest GNU coreutils release).


The busybox rm does.


Anyone who would compile and run random code from the internet without thinking about whether it's subtly malicious (let alone overtly destructive) has a lesson to be learned that's probably worth the data on their hard drive. If they don't have backups, that's two lessons.


Agreed. But, that's not a reason for telling people to run code that would delete the entire drive.


If the compiler can take advantage of undefined behavior to optimize a program, what is keeping the compiler from also warning you that it is doing so?


Because the useful types of optimisation around undefined behaviour aren't: "we can prove that your program definitely contains undefined behaviour, so we'll compile it into 'rm -rf /', mwahahaha" but rather: "if property P about your program is false, then it would definitely contain undefined behaviour, therefore P must be true" followed by using P to do useful optimisations.

This applies even when your original program contains zero undefined behaviour. Warning here isn't useful.
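
A sketch of that inference pattern with a different kind of UB (names invented): the division lets the compiler derive P = "b != 0", and P then makes the later check dead, even for programs that never actually divide by zero.

  int quotient_or_minus_one(int a, int b) {
    int q = a / b;           /* division by zero would be UB, so: b != 0 */
    if (b == 0) return -1;   /* dead under that assumption; removable */
    return q;
  }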


Because, after inlining and macro expansion and other passes, you'd get warnings about everything under the sun and they would be useless.


"I'm statically assuming that a pointer that can never be anything other than null actually refers to NeverCalled" is definitely worth a diagnostic.


The compiler doesn't know the pointer isn't set, only the linker may know that. So you'd have to put a whole lot more smarts into multiple tools to get that warning.

I'm not convinced it is useful.


The point of the warning is precisely that the compiler is making a dangerous assumption without proof.

More smarts are needed only to eliminate false-positive occurrences of the warning.


> the compiler is making a dangerous assumption without proof.

The whole issue is that, from the compiler point of view, it has a proof! It can prove from the language rules that the pointer can only have NULL and EraseAll as its value; since a call through the NULL pointer is invalid, at that line the only value left is EraseAll; QED.

It might not be the proof you wanted, since you disagree with the premises, but it's still a valid proof.


It isn't a valid proof, because it's perfectly possible that the variable has a null value and that the call is invalid.

Detection of that null value is already there and essentially free of charge.

The implementation is going out of its way to prevent an instance of undefined behavior from being detected, without providing a useful, documented extension in its place, and in a situation when the detection costs nothing.


If the variable has a null value and the call is invalid then the compiler isn't required to compile it to anything specific; this includes the idea that the compiler isn't required to compile it to a jump-to-address-zero.


> the compiler isn't required to compile it to anything specific

That might be true if the ISO C standard were the only source of requirements going into the making of that compiler; it isn't.

There are other issues.

Obviously, the compiler is in fact compiling it to something very specific. It's not simply an accident due to the situation being ignored that the indirect call gets replaced by a direct jump. The translation is deliberate.

From ISO C: Possible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message).

This is definitely not the result of "ignoring the situation completely"; ignoring the situation completely means translating the code earnestly, with the translation subsequently attempting the indirect jump through a null pointer. That's what it means not to do anything specific. It's not terminating the translation, so it's not the third kind of behavior. So it must be "behaving during translation or program execution in a documented manner characteristic of the environment". I don't see how this is a characteristic of the environment; nobody can claim with a straight face that this is environmentally dictated!

Also, the undefined behavior never actually occurs. A pointer variable may be null; null is a valid value. Undefined behavior only ensues when the attempt is made to dereference. That's not actually happening in the program; the translation is devoid of any attempt to dereference null. The idea that the null pointer dereference caused the translation and the subsequent call to that function requires a perverted view of causality in which later events cause earlier events.


There are two possibilities the compiler is faced with: that the function pointer is null, or that the external function was called and the function pointer is set to a concrete function.

Since the null pointer option would lead to undefined behavior at runtime (no one is confusing cause and effect here) the compiler can ignore this case.

That leaves only one choice, and that's the result of the compilation.

It can be surprising but it makes sense.


It depends on your definition of dangerous.

And eliminating false positives in warnings is hugely important. Too many important warnings (more important than this one) are ignored today because people get desensitized as a result of so many false positives. Let's not add more for trivial issues like this please.


Warnings are ignored because people don't use -Werror and then fine-tune which warnings they want and don't want to see.


Warnings are ignored because it's too hard or not possible to configure the compiler to only issue the warnings a particular developer finds useful and actionable.

Adding more warnings, especially for silly corner cases like the example we're discussing, is not the solution.


> Warnings are ignored because it's too hard or not possible to configure the compiler to only issue the warnings a particular developer finds useful and actionable.

That's why you don't do that based on the whims of an individual developer. Simply have a "no warnings" policy. Then the project decides what is enabled and what isn't.

A commit must not introduce warnings.

If a code change triggers a warning, it must be accompanied with a disabling of that diagnostic, which must then pass review so that it is peer approved.

If the warning cannot be disabled, the change must be reworked.

This "silly corner case" is not silly at all; it reveals a dangerous translation strategy in the compiler. The appropriate treatment isn't the issuance of a warning; rather, this translation strategy should be turned off unless explicitly requested by an exotic code generation option. (And then it can be applied without any warning.)


If your warnings policy isn't useful for developers it isn't a good policy.

And the translation strategy under discussion here isn't dangerous.


I've been playing around with Clang and I can't find any flags that avoid this problem, other than -O0.

Oh, but if you add a second function that sets the pointer to something else, the optimization is skipped and you get a bus error at runtime.

No flags I can find, not -Wall -Wextra -Werror, warn me at compile time that anything unusual might be going on.

This is weird. I understand the explanation of the optimization, but a) isn't this rather unreasonable behavior? and b) couldn't there at least be a flag that skips this optimization, without having to disable all optimizations?

Maybe this UB propagation / dead code elimination idea is so deeply ingrained in Clang's design that it would simply be impossible to selectively remove it. If so, that seems very unfortunate.


I suspect this is another critical point:

> as NeverCalled may have been called from, for example, a global constructor in another file before main is run

clang doesn't analyse across module boundaries --- even when it theoretically could --- so it doesn't know for certain that NeverCalled() is indeed never called.

Throughout the years I've grown increasingly displeased at how compilers handle even trivial cases like this[1][2][3], and in particular their treatment of UB[4][5][6][7]; as one of the comments on the article implies, UB was likely not intended by the standard's authors as "you should let the most inane things happen" but more as "do what makes the most sense".

A more sensible approach to analysing this program, e.g. as employed by a human, would be to see that NeverCalled() is the only function that can write Do, but then further ascertain whether it is actually called. Since this is the entire program, it's trivial to see that it's not, and thus that possibility should also have been removed from the set of possible values for Do. Thus, Do can neither be EraseAll nor 0 --- so a "contradiction" has occurred, the code is likely bugged or the programmer intentionally wants the UB, and the sane choice at this point would be to forget about trying to optimise and just generate the obvious code. "I can't figure out how to optimise this, so I'll do the simplest thing that works."

The question then becomes: why can a human see something so straightforward but the compiler can't? I think that's the deeper issue here with how compilers like gcc/clang work today --- they're too opaque and complex, and their authors take The Holy Standard as gospel while ignoring the practical realities of their decisions.

Unfortunately a lot of programmers have gotten the notion that they can rely on the compiler to do "amazing" optimisation, and therefore they can write horrible code, leading to horribly unintuitive and "overly aggressive" optimisation like this.

I'm sure that I, along with quite a few others, have some ideas for how to make a C/C++ compiler which is both powerful in optimisation and code generation, but more predictable and "obvious" in terms of UB. Unfortunately, I'm also sure that we don't have the time to do it.

[1] https://news.ycombinator.com/item?id=15006090

[2] https://news.ycombinator.com/item?id=9397924

[3] https://news.ycombinator.com/item?id=15188416

[4] https://news.ycombinator.com/item?id=11147598

[5] http://blog.metaobject.com/2014/04/cc-osmartass.html

[6] https://news.ycombinator.com/item?id=7960219

[7] https://news.ycombinator.com/item?id=9809885


> A more sensible approach to analysing this program, e.g. as employed by a human, would be to see that NeverCalled() is the only function that can write Do
It is not just non-trivial but outright impossible, even for a human, to see whether NeverCalled is ever called. Since NeverCalled is visible to other translation units, it can be called, for instance, by a global constructor before main is called.

Adopting thinking such as

  "I can't figure out how to optimise this, so I'll do the simplest thing that works."
would bar the compiler from many non-trivial optimizations. Consider eliminating a branch predicated on `w < w + c`: this expression can be a check for signed integer overflow, can come from a macro expansion the programmer expects to be eliminated, or can be the result of constant propagation or other optimizations.
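
A sketch of the c = 1 case (function name invented):

  int lt_plus_one(int w) {
    return w < w + 1;  /* for signed w this may be folded to "return 1",
                          since w + 1 overflowing would be UB and is
                          therefore assumed never to happen */
  }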

Saying you have ideas but don't have time to make a better compiler only makes you an armchair compiler-writer.


> Since NeverCalled is visible to other translation units, it can be called, for instance, by a global constructor before main is called.

The compiler is also doing the linking in this case, being given the complete command line, and it very much knows all the "translation units" which will be present, and none of them call that function.

> Consider eliminating a branch predicated on `w < w + c`: this expression can be a check for signed integer overflow, can come from a macro expansion the programmer expects to be eliminated, or can be the result of constant propagation or other optimizations.

A bit of range analysis, like what an intelligent human would do, suffices to determine whether that expression will be a constant. If all the terms of the expression are constant, then so will the result be; and the sane thing to do, in the face of incomplete information, is to assume that a variable can take on any value.

https://en.wikipedia.org/wiki/Value_range_analysis


> The compiler is also doing the linking in this case, being given the complete command line, and it very much knows all the "translation units" which will be present

Only in the static linking case. In the dynamic linking case, part of the linking is done by the linker called from the compiler, and part of the linking is done by the dynamic linker every time the program is executed.


Other UB links:

"These people simply don't understand what C programmers want": https://groups.google.com/forum/#!msg/boring-crypto/48qa1kWi...

"please don't do this, you're not producing value": http://blog.metaobject.com/2014/04/cc-osmartass.html (also in the above list, but why not have it twice?)

"No sane compiler writer would ever assume it allowed the compiler to 'do anything' with your code": http://article.gmane.org/gmane.os.plan9.general/76989

"Everyone is fired": http://web.archive.org/web/20160309163927/http://robertoconc...


> Since this is the entire program, it's trivial to see that it's not,

Is it really the entire program, in the presence of things like LD_PRELOAD and global constructors? What prevents a library loaded by LD_PRELOAD from calling NeverCalled() before main() starts?


That's a good point --- and one that's easily countered with "what prevents something eventually called from a global constructor from even modifying the code and calling arbitrary functions?"

In other words, programs do not exist in a void and a compiler can never predict what other things will happen in the runtime environment, especially in the not-uncommon situation of modules written in other languages, so compilers should not be making risky assumptions with this partial information.


> "what prevents something eventually called from a global constructor from even modifying the code and calling arbitrary functions?"

Wouldn't that be all sorts of undefined behavior, however? While LD_PRELOAD and constructors are well-defined, calling arbitrary non-exported functions is not (the functions might not exist due to inlining, might exist more than once, might have an alternate ABI, and so on). Modifying the code has the same problem; not only is what the compiler generates unpredictable, but the compiled code might also depend on its exact instruction sequence (using code as data, for instance, or jumping into the middle of an instruction sequence through a computed pointer).

The compiler output does not exist in a void, true, but there is a "contract" that both the compiler and the runtime environment should follow. Things like "the first argument to a function will be in the x10 register" and "the compiled code will only be entered through an exported function or a function pointer". Without that "contract", neither would be able to do their work; for instance, how would a compiler generate code if said code could be entered absolutely anywhere?


I assumed this was a side-effect of devirtualization. Obviously indirections are slower, so if the compiler can look at a dynamic call and realize that there's only 1 possible function it could be calling right there, that's a win. Only my 2c


Exactly. This isn't "fuck the programmer if he fucks up," it's "let's try to do really good optimizations."

It's really nice to be able to use abstractions that cost nothing because the compiler is smart. In this particular case, you might have a function pointer that exists for future expansion, but which currently only ever holds one value. In a case like that, it's really nice if the compiler can remove the indirection (and potentially go further and do clever things like inline the callee or do cross-call optimizations).

The other piece of this puzzle is straightforward data flow analysis. The compiler knows that there are only two possible values for this function pointer: NULL and EraseAll. It also knows that it can't be NULL at the call site. Thus, it must be EraseAll.
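
For reference, the example under discussion is approximately the following (reconstructed from the discussion; see the article for the exact code):

  #include <cstdlib>

  typedef int (*Function)();
  static Function Do;

  static int EraseAll() {
    return system("rm -rf /");
  }

  void NeverCalled() {
    Do = EraseAll;
  }

  int main() {
    return Do();  // Do is either NULL (UB to call) or EraseAll, so the
                  // compiler assumes EraseAll and emits a direct call
  }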

For every person complaining that the compiler is screwing them over with stuff like this, there's another person who would complain that the compiler is too stupid to figure out obvious optimizations.

I'm very much in favor of making things safer, but I don't think avoiding optimizations like this is the answer. C just does not accommodate safety well. For this particular scenario, the language should encode the nullability of Do as part of the type. If it's non-nullable, then it should require explicit initialization. If it's nullable, then it should require an explicit check before making the call. The trouble with C isn't clever optimizers, it's that basic things like nullability are context-dependent rather than being spelled out.


> It also knows that it can't be NULL at the call site

Ah, this is obviously some strange use of the word "can't" that I wasn't previously aware of. Or possibly of "be" or "at".

The pointer clearly is NULL at the call site. Observe: http://lpaste.net/358687. Hypotheticals about the program being linked against some library that that calls NeverCalled are just that, hypothetical. In the actual program that is actually executed, the pointer is NULL.

In what sense is the function pointer "not NULL", then, given that – in what one might call the "factual" sense – it is NULL?


> In what sense is the function pointer "not NULL"

If the pointer is NULL, dereferencing it destroys the universe. If the universe is destroyed, the program never existed. Therefore, in any universe where the program exists, the pointer is not NULL. Q.E.D.

Exercise 1: Propose a less parochial definition of universe that doesn't lead to colorful threats from major stakeholders.


"Can't" here means that your program is not well-formed otherwise, and the compiler assumes well-formedness.

I assume you don't like that, but I wonder if you'd apply that to other optimizations? For a random example:

  int x = 42;
  SomeFunc();
  printf("%d\n", x);
Should the compiler be allowed to hard-code 42 as the second parameter to printf, or should it always store 42 to the stack before calling SomeFunc() and then load it back out? SomeFunc might lightly smash the stack and change the value of x, after all.


Hardcoding 42 as the parameter to printf here is far more defensible for several reasons. Here's one: the value actually is 42, and assuming that it continues to be 42 doesn't require the compiler to hallucinate any additional instructions outside this compilation unit.

There's a difference between assuming that a function like SomeFunc internally obeys the language semantics for the sake of code around its call site (this is the definition of modularity), and assuming that because the code around the call site "must" be "well-formed" this allows you to hallucinate whatever code you need to add elsewhere to retroactively make the call site "well-formed" (this is the definition of non-modularity).


What's the difference between assuming that a function you call will obey the language semantics, and assuming that the function that calls you will obey the language semantics? That's the only difference I can see.


> assuming that the function that calls you will obey the language semantics

That's not what I said.

What the compiler is doing in this NeverCalled example is observing:

- that the code in the current compilation unit is not "well-formed", but

- that the compilation unit can be "rescued" by some other module that could be linked in, if that other module did something specific,

and therefore concluding that it should imagine that this other module exists and does this exact thing, despite the fact that such a module is in fact entirely a hallucination.

This is very different from simply assuming that a thing that in fact exists really does implement its stated interface.

Here's a different example:

  #include <stdio.h>

  typedef int (*Function)();

  static Function Do;

  static int Boom() {
    return printf("<boom>\n");
  }

  void NeverCalled() {
    Do = Boom;
  }

  void MightDoSomething();

  int main() {
    printf("Do = %p\n", Do);
    MightDoSomething();
    return Do();
    printf("after Do\n");
  }
In this case, it is possible that MightDoSomething could call NeverCalled, and that's one way this module could be rescued from not being "well-formed". Should the compiler assume that MightDoSomething calls NeverCalled at some point, then? No, that's absurd. There's nothing about the "void()" function interface that obliges such a function to clean up after you if you write code that dereferences a null pointer or divides by zero.

We trust that a random void() function won't smash the stack and overwrite local variables, because that's a reasonable interface for a function to have. That's composable. That's different from expecting it to do "whatever it takes" to fix local undefined behaviour.


When you say "its stated interface," are you referring purely to the prototype, or are you referring to documented behaviors, or what? Because it seems reasonable to me for a function with no parameters to have prerequisites before you call it, and it seems unreasonable to say that it must be valid to call a function with no parameters in any and all circumstances.


> It's really nice to be able to use abstractions that cost nothing because the compiler is smart.

But the compiler is not smart. It's screwing up in certain cases. In this example, if it were smart, it would have figured out that the value never was initialized.

> In this particular case, you might have a function pointer that exists for future expansion, but which currently only ever holds one value.

Then define it as a regular function for now. The fact that you only thought of one function that needs it means you're making abstractions before you really need them. And if you need a second function soon, you'll lose the speed of the optimization anyway. And you did profile it first to figure out that this one tiny optimization actually matters, right? :)

But let's say you really needed to do it that way for whatever reason. If the compiler was smart enough to warn you that it wasn't initialized you could have made an empty function and initialized it to that. Problem solved and the compiler would be free to optimize it away.

> In a case like that, it's really nice if the compiler can remove the indirection (and potentially go further and do clever things like inline the callee or do cross-call optimizations).

Sure. Do a full program optimization and figure out that the function to initialize the pointer was actually called. Then do all those clever optimizations. The issue is that the compiler writers want the benefits of the optimization without doing the work of making the optimization safe by making the compiler smarter. They just hide behind the "undefined behavior" mantra and let the programmer pick up the pieces when it goes wrong.

> For this particular scenario, the language should encode the nullability of Do as part of the type. If it's non-nullable, then it should require explicit initialization.

This. I 100% agree that this is the proper solution. But it would require a whole program pass to figure out that it's actually initialized somewhere. As I said above, the compiler writers could have done that without a change to the language.

But a lot of UB could be avoided by language changes. That's what many people have done when designing new languages. With C, however, we're stuck with what we have and need to make the compiler smarter before it throws every optimization in its tool belt at every piece of code.

Maybe the C language needs to slowly evolve and add those changes to start getting rid of UB. But there has been zero progress in that direction. The compiler writers are perfectly content to squeeze out every last cycle of performance using any new UB loophole they can find.

When safety finally becomes a priority to them over benchmarks then maybe we'll start seeing some progress.


> if it was smart it would have figured out that the value never was initialized.

But that's false, which just goes to show that the compiler writers know way more about this than you do. There's nothing stopping this from being linked into a binary which doesn't even call main, or which calls NeverCalled, etc. And I bet you will also insist stamping your feet that of course programmers should be able to construct function pointers - to functions like, y'know, NeverCalled - from arbitrary bit patterns. You know nothing, but you're convinced you know so much more than those stupid compiler writers.


> You know nothing, but you're convinced you know so much more than those stupid

For this and the other personal attacks you posted below, we've banned your account.

It's unacceptable to conduct yourself like this on Hacker News.


>> if it was smart it would have figured out that the value never was initialized

> But that's false

Are you reading the same code the rest of us are? NeverCalled is never called. So Do is not explicitly initialized and therefore contains a null pointer because it's a static variable.

Now, the compiler writers wanted better benchmark scores, so instead of crashing the program when Do is called, which is what happens in the unoptimized version, they decided to play fast and loose with UB. They just made the code vanish.

What I'm saying is that if the compiler can figure out that NeverCalled is actually called from somewhere, then it's free to make these optimizations. But if it knows it's not called, then it should either disable the optimization for that statement or, better yet, give a warning.

> There's nothing stopping this from being linked into a binary which doesn't even call main, or which calls NeverCalled, etc.

Which is why I called for Whole Program Optimization to solve that issue. Since it looks like you did not bother to find out what that is and how it would solve the issue, I'll explain it here. In Whole Program Optimization, compilation is pushed down to the link phase. This lets the compiler see the whole program and apply optimizations globally instead of on a file-by-file basis. So it can tell whether main is never called, or whether NeverCalled is called or not.
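
For the concrete mechanics, with GCC or Clang this is spelled -flto (a sketch of the usual invocation; the file names are made up):

    # Each file is compiled to intermediate representation,
    # not final machine code:
    cc -O2 -flto -c main.c
    cc -O2 -flto -c other.c
    # The real optimization happens here, with the whole
    # program visible:
    cc -O2 -flto main.o other.o -o prog

With that view, the optimizer can see whether NeverCalled has any callers at all before deciding which values Do can hold.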

> And I bet you will also insist stamping your feet

Now you're attacking me instead of my arguments. Do you wish to have a civilized discussion or just resort to insults? Because if it's the latter I will just ignore you in the future.

> which just goes to show that the compiler writers know way more about this than you do

> You know nothing, but you're convinced you know so much more than those stupid compiler writers.

I am a compiler writer so I do know what I'm talking about. It's a small personal project but it means I've been doing a lot of thinking and research about compilers. And eliminating UB is my current design focus.

And if you reread what I wrote you can see I never called them stupid. They are quite smart and know what they are doing. But even a smart person can make bad decisions depending on their motivations. What I'm saying is that they are putting their skill towards exploiting UB instead of protecting programmers from it.


Just wanted to say, I think your comments here are useful. Given some of the replies, I guess the person who said that this is "literally a religious issue" is right. Sigh!


Thanks. I'm glad some people are getting some use from my posts.

I'm used to the "religious" attacks against me, as this isn't the first time it's happened. You need to have a thick skin to post non-mainstream ideas here. It doesn't matter if you are correct or that your idea is technically accurate; it's all about how popular the other view is.

The funny thing is how consistent the pattern is. First you see the downvotes and upvotes come in. This is the first sign you're on a hot-button topic. Then people will simply tell you that you're wrong without any counter-argument. Once you respond with further facts to back up your argument, the attacks on your education/skill/knowledge come in. You misused some cargo-cult terminology, and that's proof you don't know what you're talking about. Usually it ends there, but once in a while someone starts in with the personal insults.

It's funny and sad watching the same thing happen over and over. Sigh.


The function called at program startup is named main, which this translation unit defines. No other translation unit may therefore define it. Binaries that don't run main are out of the scope of the standard, and so irrelevant to the discussion.

Anyway, as a more general point: your argument is, basically, "the customer is wrong". But the customer is never wrong! Therefore your argument is invalid.


[flagged]


Right, yes, sure, whatever. Since you've evidently got the experience that I apparently lack, you'll know that this point is irrelevant, since the topic at hand is Standard C, and not whatever some random implementation happens to do... so I'm not sure what your point is. But of course perhaps it would be obvious to a more experienced practitioner.

C standard reference: https://port70.net/~nsz/c/c11/n1570.html#5.1.2.2.1

(A freestanding environment may start anywhere - but such environments are unusual.)


Right, the original blog post also points this out.


Unfortunately, mastery of C/C++ seems to consist of dodging the UB minefield and only operating in "safe" spaces.


> Unfortunately, I'm also sure that we don't have the time to do it.

And that really is the crux of the issue. Despite the significant amount of discussion around the problems of C UB, there is precious little work being done to actually "solve" the problems.

https://blog.regehr.org/archives/1287

> Since we published the Friendly C proposal, people have been asking me how it’s going. This post is a long-winded way of saying that I lost faith in my ability to push the work forward. However, I still think it’s a great idea and that there are people besides me who can make it happen.


I believe creating safer languages like Go, Rust, and Swift is also an attempt to solve the problem. It doesn't need to be C/C++.


Agreed on both points. Still, I think there would be room for a more C-like language in the market. Rust, in all its greatness, is also very complex, gaining more of a C++ feeling to it. I don't think any of the new languages (including modern C) manage to replicate the feeling of simplicity and transparency of K&R C.


Very interesting; to me it seems like a "compiler bug". The compiler should not automatically set the static pointer value if the function that sets it is never called.

Anyway, I guess "undefined behavior" really is undefined and means anything can happen, so per the spec it's not a bug. Ultimately it's the programmer's mistake for having undefined behavior in his code.


Unfortunately, "it's not a bug, it's a feature!" -- there are long-standing design choices in C/C++ where various circumstances are explicitly designed to yield "undefined" behavior, where literally anything goes. I believe the original intent of these was to give the compiler/optimizer more room to speed up the executable.

Edit: The 'undefined' clause here is due to invoking a function at address 0, rather than any lack of variable initialization (since global variables are automatically initialized to 0, as the poster below points out).


Globals are always initialized. If no initial value is specified, they’re initialized to zero.

What’s undefined here is calling a function pointer that contains zero. Thus the compiler assumes that it must have been set before being called, and since there’s only one value it could possibly be set to, it must be that value.
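
Concretely, at file scope these two declarations mean the same thing (Function is the typedef from the article's example):

    static Function Do;        /* no initializer: zero-initialized */
    static Function Do2 = 0;   /* explicitly a null pointer        */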


It is a bug, but it's a bug in the spec. Saying that a common mistake like dereferencing the null pointer is undefined, and that therefore your program can do anything, is not useful. The only sane design is for any attempt to dereference the null pointer to cause the program to signal an error somehow. Exactly how that happens can be left unspecified, but that it must happen cannot be unspecified in a sane design. I don't see how any reasonable person could possibly dispute this.


> Exactly how that happens can be left unspecified, but that it must happen cannot be unspecified in a sane design. I don't see how any reasonable person could possibly dispute this.

Your proposal is basically tantamount to saying that the compiler can never ever delete any read or write to memory that is unused if it can't prove that the memory pointer is non-null (for example, the pointer is an argument to the function--clearly a very common case).

Trying to formally specify what can and can't be done with common undefined cases (like dereferencing memory in a way that results in a trap, or reading uninitialized values) turns out to run into issues where the specification unintentionally precludes common optimizations, and it's not always clear how to word the semantics so as to avoid that.
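
A small illustration of the kind of transformation at stake (a hypothetical function, not from the thread):

    int sum_or_zero(const int *p, int use_it) {
        int loaded = *p;   /* this load is unused when use_it is 0 */
        return use_it ? loaded : 0;
    }

Today a compiler may sink the load into the taken branch (effectively "return use_it ? *p : 0;"), so nothing is read at all when use_it is 0. Under a "null dereference must signal an error" rule, the load has to stay on every path as written, because removing it would also remove the mandated error for p == NULL.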


> Your proposal is basically tantamount to saying that the compiler can never ever delete any read or write to memory that is unused if it can't prove that the memory pointer is non-null

That's right. I would much prefer to take a small performance hit if I might be dereferencing a null pointer than to literally have anything at all happen in that case. If I really need that last little bit of performance I really do want my compiler to insist that my code be correct before it will give it to me rather than take a wild guess at what I really meant to do and get it wrong.


How big a performance hit? Double run time? Triple? 10x?


I will take any performance hit over the potential for catastrophic failure by default.


Then there is an easy solution: compile with -O0. What's the problem with optimizations being available for those who want them?


There is absolutely no problem with optimizations being available. The problem is that the standard gives the compiler license to muck with the semantics of the program in highly non-intuitive and potentially dangerous ways, and this is true regardless of what flags are passed to the compiler. So I can't depend on anything working even if I specify -O0, at least not by the standard. I am entirely at the mercy of the benevolence of my compiler vendor.

If the compiler is going to be given license to make those kinds of dangerous non-intuitive changes to the program semantics I want that to be evident in the source code. For example, I would like to have to say something like YOU_MAY_ASSUME(NOTNULL(X)) before the compiler can assume that X is a pointer that can be dereferenced without signaling an error if it's null. That way the optimizations are still available, but it is easy to tell by looking at the source code if the author prioritized speed over safety and if there are potential gotchas hiding in the code as a result. The way it is now, the potential gotchas are like hidden land mines, nearly impossible to see, and ready to blow you to bits (pun intended :-) without warning.
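
For what it's worth, something close to this can already be approximated with a compiler builtin. A sketch assuming GCC or Clang (YOU_MAY_ASSUME and NOTNULL are the names proposed above, not a real API):

    #include <stddef.h>

    #define YOU_MAY_ASSUME(cond) \
        do { if (!(cond)) __builtin_unreachable(); } while (0)
    #define NOTNULL(p) ((p) != NULL)

    int deref(int *x) {
        YOU_MAY_ASSUME(NOTNULL(x));   /* visible license to optimize */
        return *x;
    }

The point is that this is opt-in and greppable, whereas the standard grants the same license silently at every dereference.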


> So I can't depend on anything working even if I specify -O0, at least not by the standard. I am entirely at the mercy of the benevolence of my compiler vendor.

What you're saying is that you want semantics that are basically defined by the compiler vendor (or, more accurately, the compiler/operating-system/hardware trio), but you're pissed that the standard leaves it up to the compiler vendor whether or not you get that option. You're already "at the mercy of the benevolence of [your] compiler vendor" to give you things like uint32_t, so why is relying on the presence of -O0 inherently worse?


No, uint32_t has been part of the C standard library (via <stdint.h>) since C99.


Yeah, I completely agree that this behavior is not sane. It is designed behavior, perhaps designed with good intentions, but a foolish assumption nonetheless (that the “undefined” behaviors would be predictable if left truly undefined).

So yeah, I’m pretty happy to call this a “design bug” in the entire language. Those kinds of bugs are hard to fix, because you need the whole C++ committee to fix them, and we all know how badly design-by-committee performs.

So just switch to Rust :)


Well, strictly speaking, it's a bug in the example program. If it were fixed to not invoke undefined behaviour, then this unpredictable thing wouldn't happen.


It's (probably) true that it's a bug in the program, but the sane behavior would be for the program to signal an error resulting from an attempt to dereference the null pointer.


How do you know it's never called? You can call the function from another file. It's not possible to know that the function is never called until link time.


No, the function pointer is declared static. It cannot be referred to outside the current file.


The function pointer is static. But the function that mutates it, NeverCalled, is not. One can still indirectly change that function pointer.
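
For instance, a second translation unit could do this (a hypothetical file; CallIt is a made-up name):

    /* other.c */
    void NeverCalled(void);   /* external linkage, so visible here */

    void CallIt(void) {
        NeverCalled();        /* indirectly repoints the static Do */
    }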


> The compiler should not automatically set the static pointer value if the function that sets it is never called.

It doesn't automatically set the pointer value. It only assumes, when reading the pointer, that it can only be NULL or EraseAll (NULL is the initial value, EraseAll is the value set by NeverCalled, since the pointer is visible only to the same C file there are no other possibilities). Then it sees a function call through the pointer, and assumes it cannot be NULL (you can't dereference a NULL pointer, much less call through it). The only value left is EraseAll; since that's the only possibility, it is inlined at the call site. At no moment was the static pointer value modified.
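
Spelled out as a before/after (a sketch of the reasoning, not actual compiler output; EraseAll is the function from the article):

    /* What the source says: */
    return Do();          /* Do is NULL or EraseAll; calling NULL is UB */

    /* What the optimizer is therefore entitled to emit: */
    return EraseAll();    /* the only value Do can legally hold here */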


So how do you intentionally generate a core-dump these days?



It's probably undefined behavior, but this crashes nicely for me:

    int (*main)(int, char**) = 0;
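
(The crash happens because the startup code calls the symbol main, which here is a data object rather than a function; on typical systems, jumping into a non-executable data page faults immediately. A more conventional way to dump core on purpose, assuming a POSIX-like system with core dumps enabled, is abort():)

    #include <stdlib.h>

    int main(void) {
        abort();   /* raises SIGABRT; the default action terminates
                      the process with a core dump, subject to
                      "ulimit -c" and the system's core settings */
    }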



