Don't use -Werror (schmorp.de)
55 points by ingve on July 24, 2016 | 69 comments



> Enabling -Werror in distributions virtually guarantees that they won't build: Even if you took care of all compiler warnings in all compilers for which you enable this switch, it's just a question of time till a new compiler version comes along that has a new warning message and - BOOM.

This is a feature. This triggers a review of the new warnings introduced by the new compiler version, to see if they have found bugs, should be squelched case by case as false positives, or should be squelched project-wide.

> Two recent examples are squid and qemu-linaro, two packages I had to compile from source as Debian GNU/Linux didn't have (good enough, or any) packages for them.

The fundamental problem here is using a contributor's build environment (where I want -Werror to get them to contribute warning-induced bugfixes, or squelches/ignores if they're really false positives) for a non-contributor's task (where they just want the final binary, and aren't going to bother reporting the warnings even if I want them to).

Non-contributors building from source isn't a relevant niche to me (I generally ship binaries), but I can sympathize with the request for a warnings-as-warnings fast build path for them.

But make -Werror the default build path for developers. This needs to be opt-out, not opt-in.


> This is a feature. This triggers a review of the new warnings introduced by the new compiler version, to see if they have found bugs, should be squelched case by case as false positives, or should be squelched project-wide.

The thing is, the software already had these bugs and worked well enough for most people despite them; it's not worth breaking the build over unless you are the developer.


Those bugs could be buffer overruns or similar security problems. Just because the program appears to work doesn't mean it should be used.


They could be, or they could be nothing, and the program might not even be exposed to untrusted input anyway.

We can have a discussion about perhaps splitting programs into security classes, though I would prefer that the upstream people use the cutting-edge security analysis and then backport patches with security advisories.


The post agrees with your comment:

"Now, some people (and I feel this group is growing) are going a step too far, by enabling -Werror __unconditionally__ in their software packages. And what applies to me - not being able to anticipate warnings in other compilers - of course applies to them as well"

"squid requires me to __doctor around in its configure__ script, and even __patch some Makefiles__, while qemu-linaro has a central location for it."

The complaint boils down to "-Werror is a development tool, so use it like that".


This all depends on how much your compiler changes and how often you upgrade the compiler. Do you fix all the new warnings before upgrading? If it's a large codebase, that can take a while. Meanwhile your continuous build can't use the new compiler. (Or in practice, that's when you stop using something like -Werror and start disabling the new warnings.)

For most open source projects, it's probably not that big a deal. But for something large like a monorepo, we're talking about hundreds of millions of lines of code that worked before.

People should be able to use the new compiler while someone goes back and looks at the new warnings. Assuming it's a priority. This task isn't automatically more important than other work.


> But for something large like a monorepo, we're talking about hundreds of millions of lines of code that worked before.

Unless you're also monobuild (a nightmare at 100 MLOC no matter how you slice it, although I've heard enough horror stories that I bet such exists), you don't need to upgrade every project at the same time. And if you can't afford the dev time to fix that many warnings, you probably can't afford the QA time to verify the compiler upgrade didn't introduce any "new" bugs from e.g. previously benign undefined behavior that the optimizer can take advantage of now in new optimizations. Although this presupposes you've got decent control over pinning your compiler versions.

My experience has been that compiler upgrades generally don't introduce too many new warnings - I'll generally fix all the new warnings before switching over the CI servers, yes, although I've only dealt with <1MLOC or so at a time. New compilers may introduce too many warnings to fix all at once, in which case I may temporarily squelch or downgrade the specific "lower priority" warning types (read: those that seem to be mostly false positives) and establish corresponding work items for going through the codebase to fix those individual warnings and unsquelching the warning type, as a means to unblock whatever high priority thing is requiring the new compiler in the first place.

...in other words, it may be sufficient in the short term to review the new types of warnings, which is mostly independent of codebase size, if only to create a backlog to prioritize and triage. And hey, maybe each new warning type is useless, in which case you've found a bunch of warnings to disable so they don't distract your fellow developers.
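
A minimal sketch of that per-warning downgrade, assuming GCC or clang and using -Wshadow as a stand-in for whichever new warning is too noisy (the command-line spelling is -Wno-error=shadow):

  // Demote this one diagnostic from error back to warning for the whole
  // translation unit, even when the build otherwise runs with -Werror:
  #pragma GCC diagnostic warning "-Wshadow"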


It might be okay for your project, but there are many cases where warnings seriously get in the way (e.g., indirect throw/longjump but leaving out a return statement).

Besides, the warnings depend on the compiler (which may be non-gcc/clang/vs) — and may not even make sense on a given system. So shipping with -Werror is not really a good idea. Put the important stuff in the tests.


> It might be okay for your project, but there are many cases where warnings seriously get in the way (e.g., indirect throw/longjump but leaving out a return statement).

I'd prefer to explicitly annotate the indirectly throwing function as non-returning, when possible. Longjumps are rare enough for me I'd be willing to individually suppress them, or suppress them over a range of code where they happen to be common. I've seen return warnings catch enough bugs that I believe this will save more time than it spends in the long run. If I didn't, I'd explicitly lower that individual warning's severity. I will also lower the warning's severity on a given compiler, if (but only if) I can't properly annotate it away on a given compiler.
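
As a sketch of the annotation approach (using the standard C++11 attribute; the function names here are made up):

  // Marking the thrower as non-returning lets -Wreturn-type see that the
  // "missing" return statement is actually unreachable.
  [[noreturn]] void fail(const char *msg);  // always throws

  int parse_digit(char c) {
      if (c >= '0' && c <= '9')
          return c - '0';
      fail("not a digit");  // no return needed after a [[noreturn]] call
  }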

> Besides, the warnings depend on the compiler (which may be non-gcc/clang/vs)

Again, feature. More coverage. Good.

> and may not even make sense on a given system.

In which case, upon reviewing said warning and determining said warning indeed does not make sense, said warning can be individually suppressed as mentioned.

> Put the important stuff in the tests.

This is a statically typed language. Successfully compiling is the first test. Successfully passing static analysis is the second test.

Undefined behavior bugs can lead to some very subtle behavior that can be very difficult to unit test for, and tends to be quite sensitive to things like compiler flags, which call site they were inlined at, allocation patterns, etc. to even manifest in the first place. Given how nasty they can be to debug, they're also some of the bugs I'm most interested in catching - ideally before I commit, definitely before I ship. Taking full advantage of compiler diagnostics is way easier than e.g. writing enough unit tests to catch all UB caused from falling off the end of a non-void function without a return statement.
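
A hypothetical instance of that last class, which -Wreturn-type catches at compile time:

  // If level is ever 3, control falls off the end: undefined behavior.
  // The caller reads garbage that may "work" until inlining or
  // optimization levels change - exactly the subtlety described above.
  const char *level_name(int level) {
      if (level == 0) return "info";
      if (level == 1) return "warn";
      if (level == 2) return "error";
  }  // gcc/clang: warning: control reaches end of non-void function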


The time you're spending reviewing warnings, determining warnings don't make sense, and individually suppressing warnings sounds like it could be considerable. Worse is that it's never ending - each new gcc/clang (or change from gcc to clang etc.) could bring hundreds of new errors…

Otoh, gcc and clang now support colorized output. Warnings stand out a lot if you're building with make, and can be noted and disabled or fixed (if it's an actual problem) ${WHENEVER}, without having to drop everything now.

I wonder if the argument for/against -Werror is really the difference between the beliefs "a compiler warning is probably a problem in my code" vs "a compiler warning is probably a problem in the compiler"


> The time you're spending reviewing warnings, determining warnings don't make sense, and individually suppressing warnings sounds like it could be considerable. Worse is that it's never ending - each new gcc/clang (or change from gcc to clang etc.) could bring hundreds of new errors…

Writing good code is time consuming. Remember, warnings are potential bugs in your code; they should be investigated. Also, if you start a project with -Werror, you won't have to deal with lots of warnings at a single time.

> Otoh, gcc and clang now support colorized output. Warnings stand out a lot if you're building with make, and can be noted and disabled or fixed (if it's an actual problem) ${WHENEVER}, without having to drop everything now.

If you ignore warnings that are benign, over time there will be more and more warnings and it becomes harder and harder to notice new warnings that are bugs.

> I wonder if the argument for/against -Werror is really the difference between the beliefs "a compiler warning is probably a problem in my code" vs "a compiler warning is probably a problem in the compiler"

Compilers are remarkable pieces of software. In the last 4 years or so I can remember 3 compiler bugs I found in Clang and GCC, and all were segfaults, not incorrect warnings. That's after running Clang and GCC thousands of times on hundreds of projects. So as a rule of thumb, if the compiler reports a warning, it's correct.

Here's one of the compiler bug reports; I can't find the others:

http://lists.llvm.org/pipermail/llvm-bugs/2015-February/0387...


> If you ignore warnings that are benign, over time there will be more and more warnings and it becomes harder and harder to notice new warnings that are bugs

I -Wno-XYZ the common ones that I don't feel are issues

> > I wonder if the argument for/against -Werror is really the difference between the beliefs "a compiler warning is probably a problem in my code" vs "a compiler warning is probably a problem in the compiler"

> Compilers are remarkable pieces of software, in the last 4 years or so I can remember 3 compiler bugs I found in Clang and GCC and all were segfaults not incorrect warnings. That's after running Clang and GCC thousands of times on hundreds of projects. So as a rule of thumb if the compiler reports a warning it's correct.

What I mean is that, for example, gcc warning me about "if(x = y)" is a problem in gcc, as it's generating superfluous output for my correct, intended input


> What I mean is that, for example, gcc warning me about "if(x = y)" is a problem in gcc, as it's generating superfluous output for my correct, intended input

FWIW, it's now automatic for me to write if((x = y)) if that's actually what I meant. Or if (type x = y).
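
Spelled out, with hypothetical names (the diagnostic involved is gcc/clang's -Wparentheses):

  // gcc: warning: suggest parentheses around assignment used as truth value
  if (x = next_token())
      consume(x);

  // No warning: the extra parens assert the assignment is intentional.
  if ((x = next_token()))
      consume(x);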


> The time you're spending reviewing warnings, determining warnings don't make sense, and individually suppressing warnings sounds like it could be considerable.

It can be. The thing is, I still spend more time debugging than writing code. The question is: Does this category of warning save me more time in bugs prevented that I don't need to debug, than it takes to review and suppress false positives? If not, I can disable the entire warning category. But otherwise - the considerable cost saves an even more considerable cost in debugging!

Now, the time reviewing warning categories also takes some time, but it's been absolutely worth it in my experience. Even warnings which don't make sense to leave enabled globally can be useful - there are times when they make sense to force-enable locally.

Clang can generate warnings about these two structures:

  struct foo { char c; /* 3 bytes implicit padding */ int i; };
  struct bar { int i, j, k; /* possibly 4 bytes of implicit padding */ };
Totally worthless for 99% of my code. But if these structures need to be exactly the same memory layout between multiple compilers with different implicit padding rules - because they're memory mapped, or serialized as a simple char[] blob, or whatever else, it's a very useful warning to enable around the struct definitions! (I'll combine this with static asserts about structure sizes to seal the deal. No, just because they're the same size doesn't mean they were padded the same between compilers!)

I've spent weeks tracking down issues that eventually turned out to be serialization mismatch related. Yes, it can be that subtle. I assume I saved at least a week between the multiple times I or a coworker triggered the warning-as-error modifying the structure. It took maybe a minute to look up the right warning, and to wrap the code in the relevant pragmas to force-enable it just for those structures. Worth!
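
A sketch of that combination (the diagnostic is clang's -Wpadded; the struct and the 4-byte-int ABI are assumptions for illustration):

  #include <cstddef>  // offsetof

  // Force-enable the diagnostic just around the wire-format structs, so
  // any edit that introduces implicit padding breaks the build:
  #pragma clang diagnostic push
  #pragma clang diagnostic error "-Wpadded"
  struct wire_foo { char c; char pad[3]; int i; };  // padding made explicit
  #pragma clang diagnostic pop

  // Same size on two compilers doesn't prove the same layout, so pin the
  // offsets the serialized format depends on as well as the size:
  static_assert(sizeof(wire_foo) == 8, "wire_foo size changed");
  static_assert(offsetof(wire_foo, i) == 4, "wire_foo::i moved");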

> I wonder if the argument for/against -Werror is really the difference between the beliefs "a compiler warning is probably a problem in my code" vs "a compiler warning is probably a problem in the compiler"

For me it's "I want to review every warning type, and decide how to handle it".

Some of them are probably a problem in my code. These should remain errors - I need to fix them. Some of them are probably a false positive, but catch big issues that make it worthwhile anyways. These should remain errors - I can suppress them. Some of them are probably a false positive, and don't catch big issues, and just waste my time. These shouldn't even be warnings - I can disable them.


> indirect throw/longjump but leaving out a return statement

Hopefully a modern language would have a divergence type.


Sure, but do you opt-in or opt-out of being a developer versus just someone trying to build a package?


I always argued that compiling and looking for errors are two different things and should be two different tools. Developers need linters/warnings.

Packagers don't need this information unless it will stop the software from functioning as it has previously been functioning (preexisting bugs included).


> There is a common complaint about my software - some compilers sometimes emit warnings when compiling some code, which some people sometimes find somewhat offensive. They can be very useful though, which is why I investigate the ones I know of.

If you don't take care of every warning, you may miss a new warning amongst all the old warnings. That's why -Werror is nice: it forces you to deal with every warning as it pops up. Also, when I compile someone else's software and a bunch of warnings pop up, it makes me question the quality of the codebase. The code quality may be fine, but it doesn't look good to the outside world.

> Even if you took care of all compiler warnings in all compilers for which you enable this switch, it's just a question of time till a new compiler version comes along that has a new warning message and - BOOM.

Great: if a new warning pops up with a different compiler, I want to know about it. It's either a bug in my code or a false positive. Using -Werror is more work, but it improves code quality.


Summary:

Enable -Werror in debug builds, disable it in production builds (i.e., publicly released builds).

Reason: -Werror is helpful for you, useless for a 3rd party.


Yes, exactly. For example, GNU coreutils auto-enables -Werror for devs when running from a git repo, whereas tarball releases do not add that option. See:

http://git.sv.gnu.org/gitweb/?p=coreutils.git;a=blob;f=confi...


I agree wholeheartedly. When I'm programming I typically want most things that would cause a warning to stop me in my tracks. However, I cannot count the number of times I've been trying to build someone else's code where the configuration uses -Werror, things that didn't raise a warning on their version of gcc or clang do on mine, and now I've got to fix possibly dozens of non-errors.


Came here to say exactly this. No reason you can't be extra careful during development and turn the flags off in release configuration.


It's not scalable for you either. If everything becomes an error, you can't go a single day without fixing everything the compiler plus your dependencies (aka, just some other people you've never met) complain about.

In particular if you update a library and they deprecate some functions, -Wdeprecated will start emitting some warnings, which are now errors. Do you think you can fix that in a day? It could take months to fix something that's literally not a problem yet.

By the way, I couldn't build MariaDB on this one machine because some plugin we don't use has -Werror set and fails with obscure C++ nonsense.


> In particular if you update a library and they deprecate some functions, -Wdeprecated will start emitting some warnings, which are now errors. Do you think you can fix that in a day? It could take months to fix something that's literally not a problem yet.

It doesn't take months to add -Wno-deprecated to a Makefile.


> It doesn't take months to add -Wno-deprecated to a Makefile.

After which you no longer have any visibility on what deprecated fxns you're calling, which means one day updating that library again is going to completely block you until you rework your interface.

-Wno-error=deprecated would probably be better.
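
A sketch of what that buys you (for functions carrying the deprecated attribute, the precise gcc/clang warning name is -Wdeprecated-declarations; the functions are hypothetical):

  [[deprecated("use parse_v2()")]] int parse_v1(const char *s);

  int use(const char *s) {
      // Built with -Werror -Wno-error=deprecated-declarations, this still
      // prints a warning - keeping the visibility - without turning a
      // dependency's deprecation notice into a broken build.
      return parse_v1(s);
  }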


Even better in my experience: A work item to be triaged, generated when you add either.


Better if the third parties hit you up with the bug report and you come back with a patch.


A warning doesn't stop them from filing a bug report.

It doesn't have to be an error though.


Sure it does. 99% of the time it _is_ an error and will lead to unforeseen behavior.


I use -W4 and -WX with MSVC to enable the highest warning level and flag all warnings as errors. Then I disable a couple of the more pointless warnings (unreferenced formal parameter, unreferenced local function, unreachable code...) that would otherwise keep me from being able to use -W4 without jumping through arbitrary hoops.
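
As a sketch, the pragma form of that opt-out (warning numbers assumed from the parenthetical: C4100 unreferenced formal parameter, C4505 unreferenced local function removed, C4702 unreachable code):

  // Compile with -W4 -WX, then silence the pointless ones individually:
  #pragma warning(disable: 4100 4505 4702)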

The problem with GCC, at least at the time I was using it several years ago, is that there seemed to be no way to disable individual warnings. Is that still true (or was it ever?)


GCC has pretty much always had `-Wno-YOUR_SPECIFIC_ERROR`.

By that, I mean it was present in a random version from 1993 (2.4.5). But it wasn't there in 1987 (1.35, the oldest version that can be easily downloaded), so it didn't literally always have it.

Edit: I wasn't satisfied not knowing, so I dug through some more versions. The -Wno- flags were introduced in GCC 2.0 (1992). (Though I only tracked down 1.41, not 1.42, the last of the 1.X line.)


-Wno-specific-warning (I think it's always had this, or at least for a very long time).


And:

  #pragma GCC diagnostic push
  #pragma GCC diagnostic warning "-Wspecific-warning"
  #pragma GCC diagnostic ignored "-Wspecific-warning"
  ...
  #pragma GCC diagnostic pop
For warnings you want to disable in a smaller scope. s/GCC/clang/ for clang. Bonus points:

  #define MM_PRAGMA(x)               _Pragma(#x) /* _Pragma wants a string literal */
  #define MM_WARNING_IGNORE_GCC(x)   MM_IF_GCC(   MM_PRAGMA(GCC diagnostic ignored x) )
  #define MM_WARNING_IGNORE_CLANG(x) MM_IF_CLANG( MM_PRAGMA(clang diagnostic ignored x) )
  #define MM_WARNING_IGNORE_MSVC(x)  MM_IF_MSVC(  __pragma(warning(disable: x))        )
  #define MM_WARNING_PUSH() \
    MM_IF_GCC(   MM_PRAGMA(GCC diagnostic push)   ) \
    MM_IF_CLANG( MM_PRAGMA(clang diagnostic push) ) \
    MM_IF_MSVC(  __pragma(warning(push))          )
  #define MM_WARNING_POP() \
    MM_IF_GCC(   MM_PRAGMA(GCC diagnostic pop)   ) \
    MM_IF_CLANG( MM_PRAGMA(clang diagnostic pop) ) \
    MM_IF_MSVC(  __pragma(warning(pop))          )

  MM_WARNING_PUSH()
  MM_WARNING_IGNORE_GCC("-Wspecific-warning")
  MM_WARNING_IGNORE_CLANG("-Wspecific-warning")
  MM_WARNING_IGNORE_MSVC(1234)
  ...
  MM_WARNING_POP()
Defining MM_IF_*() is left as an exercise to the reader.
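
One possible answer to the exercise, keyed off the usual predefined macros (note clang also defines __GNUC__, so it has to be tested first):

  #if defined(__clang__)
  #  define MM_IF_GCC(x)
  #  define MM_IF_CLANG(x) x
  #  define MM_IF_MSVC(x)
  #elif defined(__GNUC__)
  #  define MM_IF_GCC(x)   x
  #  define MM_IF_CLANG(x)
  #  define MM_IF_MSVC(x)
  #elif defined(_MSC_VER)
  #  define MM_IF_GCC(x)
  #  define MM_IF_CLANG(x)
  #  define MM_IF_MSVC(x)  x
  #endif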


clang understands #pragma GCC diagnostic fine, no need to have special code for it.


Hmm. I could swear this wasn't the case in one of the build environments I ran into (possibly an older version of clang or something? or perhaps I'm thinking of some other non-diagnostic pragma...)


For a fun case of "I get an error but you don't", here's a bug: https://bugs.launchpad.net/ubuntu/+source/gcc-defaults/+bug/... - around 5 years of GCC versions which ignore the explicit `(void) write(...)` and still report an error.
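
Roughly the shape of the affected code (a sketch: under _FORTIFY_SOURCE, glibc marks write() with __attribute__((warn_unused_result)), and the gcc versions in that report refuse to let the traditional void cast silence it):

  #include <unistd.h>

  void log_line(const char *buf, size_t n) {
      (void) write(2, buf, n);  // still: warning: ignoring return value of 'write'
  }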


This article is basically a corollary of the Robustness Principle, "Be conservative in what you do, be liberal in what you accept from others". In this case we're accepting certain dependencies from others, and we SHOULD be liberal/tolerant of dependencies that work but generate warnings.


There are some legitimate reasons for leaving -Werror enabled all the time. In the embedded world, for example, I'm less interested in supporting a wide variety of compilers, and more interested in rigidly complying with my particular toolchain's definition of "correct." In that case, I'm going to ask it to be as pedantic as possible because I'm never going to compile the code with any other toolchain, and I'd like to minimise the possibility of a bug because the code is going somewhere that a human won't be able to follow.


Go has the equivalent of -Werror on all the time. How does it address the issue of "your compiler might emit a warning that my compiler didn't?"


Go has something of an imposed rigidity that I think a lot of C programmers wouldn't tolerate well (culturally, viscerally). One example: having to comment out all unused variables because you commented out a function call for debugging purposes during development (you couldn't just pass a flag to the compiler, at least the last time I used Go).

Aside - I seem to remember a blog post you wrote on your experience with the Go language, and I think it pretty accurately reflected a lot of my own experience with it.

edit: found it [0]

[0] http://ridiculousfish.com/blog/posts/go_bloviations.html


Commenting out code correctly, in a way that avoids complaints about unused code, should be possible to do in a proper IDE.


I upvoted this (apparently it's been downvoted again) -- for the record I totally disagree with the premise but I think it's a serious enough point to merit being addressed rather than down-voted[0].

> Commenting out code correctly, in a way that avoids complaints about unused code, should be possible to do in a proper IDE.

First of all, failing to compile, in and of itself, should be reserved for a particular class of failures, not for linting. I feel pretty strongly about this and I think a lot of other people do.

Further, consider the complexity of an IDE that supports this - e.g. suppose you previously declared `uint64_t a, b, c, d;` and someone commented out the only function which uses c? The editor would then have to make a new line with `// uint64_t c;` and contract the old line correctly. That isn't a simple procedure, to be sure, and it's perfectly reasonable to use nano/vi to write code (you might say it's not effective). But at least in my opinion, programmatically correct code shouldn't fail to build because I'm not using super-advanced-ide-X.

[0] Aside - I tend to dislike this downvote-on-disagree trend lately; imho the discussion would benefit from respecting and responding to disagreeable opinions, and downvotes should be reserved (exclusively, but I guess subjectively) for comments that aren't worth the time or effort to disagree with or respond to, e.g. blatant spam, highly condescending comments, personal attacks, etc.


Go mentality says that you shouldn't need an IDE for coding in a language.


Next time I write a Go program, I'm hacking the compiler locally so it stops whining about this "problem".


Go only errors on a small number of things and, while some of those things would be warnings in other languages, I don't think it's fair to frame them in that light.

The go compiler strives to print pretty much no output at all, and it does that by not having any warnings at all.

If you want warnings in go, you use tools like "go vet" which tell you "this bit is indicative of an issue". The go compiler won't give that to you though.

As other replies said, Go also doesn't have many compilers. Really it's only 1 supported one. They also don't add new "errors" to the compiler over time, pretty much for this compatibility reason.


There are only 2 go compilers, with very few warnings. C has a whole bunch of compilers and some of them have a whole bunch of warnings.


There's only 2 Go compilers; there's like 20 C compilers.


Even slightly different versions of gcc emit very different warnings. It's not very predictable.


Have you ever used multiple compilers? Try compiling the same thing written in C or C++ with clang, then gcc, then icc, and see how -Werror treats you within any remotely interesting code. It's hard to predict what one compiler feels is perfectly okay (or perhaps ambiguous) and another feels is downright wrong. It's downright infuriating in a cross-architecture environment. -Wall, -Wpedantic and -Wextra are usually plenty.


> It's downright infuriating in a cross-architecture environment

The only thing more infuriating than being unable to predict when your code is going to fail the build because of -Werror on some other compiler/arch is your bugs generating warnings on some compiler/arches that don't fail the build. Cross-architecture environments are one of the main reasons I want to enable -Werror and kin. I won't notice all the warnings otherwise.


I see the issue as: there are warnings, and then there are warnings.

One thing is for the compiler to warn about something like implicit conversion of one type to another.

Another, very different, is warning for things that are correct but might be in error (like 'for' without brackets, etc.).
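
A hypothetical instance of that second class (gcc's -Wmisleading-indentation is one such diagnostic: the code is perfectly legal, it just may not say what was meant):

  for (i = 0; i < n; i++)
      sum += a[i];
      count++;  // indented as if inside the loop body, but runs only once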


Non-idiomatic code is an error. Not because computers have an issue, but because it wastes fractions of a second every time someone reads the code.

PS: It's the same reason you capitalize the first letter in a sentence.


1 - That's the job of a linter

2 - automatically generated code has no obligation to be idiomatic

3 - If I had a cent for every time "idiomatic" code got in the way of readability I'd be a millionaire right now


If writing idiomatic code is causing readability problems, you're doing it wrong. Successful code is read far more than it's written, and even machine-generated code needs to be well written to locate bugs etc.

iF YoR CodE LoOks lIKe tHIs, yOu FaIl.


> If writing idiomatic code is causing readability problems, you're doing it wrong.

Sure, because breaking an 82-char line in Python into 2 lines (because pep8) is much more readable than leaving it at 82 chars.


A 65+ character line is already ugly. 80 is an arbitrary but sensible cutoff.


Speak for yourself

a = "a long message with 70 chars..."

vs

a = ("a long message..."

     " with 70 chars...")


You should not be using long messages in code; for one thing, you can't localize them across multiple languages. You can do:

  A = \
  "A really long string"
But, the odds are really good you are making a mistake.


Readability is still bad in your example, and not all projects require localization (not to mention usage of \ is not encouraged)


To generalize: Don't put debugging shit in production code.


In general, don't listen to "don't use X" or "use only X" advice; there is often a case where you may need it. No need to get religious over it.


Why can't developers, y'know, ACTUALLY READ THE BLOODY WARNINGS? And then, y'know, they could FIX MORE THAN THE FIRST ONE after a single build pass?


Please don't.

We detached this subthread from https://news.ycombinator.com/item?id=12155394 and marked it off-topic.


In my observation, the problem is recompilation avoidance. If you miss the warnings the first time around (maybe you're just trying to get the thing to run), they won't come back unless you do a full rebuild. With a little care you can build a build system that will print them out every time you run.


Why avoid recompiling? Why this reluctance to do a full rebuild?

I started programming in a world where you had to wait a day to get 300 lines of Fortran built. Now I routinely build the whole 44Gb of community ports of a major Linux distro in three days on a three hundred buck box at home.

There are 24 hours in a day, and your full rebuild will go just fine while you sleep, so long as you DON'T use -Werror.


I don't want to eat a 24h overnight full rebuild every time I fix a single typo. I do want to verify it still builds (i.e. I haven't missed one of the use cases of a renamed variable.)

The shorter the feedback loop, the less context I have to rebuild for errors, the faster I can fix the error, the more efficient I am.

I'm not so far along that I make much use of the red squiggly lines generated by my IDE to highlight syntax errors before I even hit save - I use too many languages that can't be adequately parsed that fast for them to be terribly accurate - but it's a sign of just how much people want to shorten that feedback loop.

Have the CI server do full rebuilds overnight? Sure. Although I still have to bug my coworkers to pay enough attention to the CI server to even notice it's gone red from their changes, at times. Convincing them to read through the warning logs is a nonstarter.


That's a good idea for a makefile hack - somehow force files with warnings to rebuild every time. You'll see them, and also get motivated to fix the warnings.


> Why can't developers, y'know, ACTUALLY READ THE BLOODY WARNINGS?

Same reason checklists are handy: people automatically optimize away "dead code" (never mind the fact that reading the warnings is sometimes useful).

> And then, y'know, they could FIX MORE THAN THE FIRST ONE after a single build pass?

I do get multiple errors with -Werror and kin, FWIW. But again we run into human nature - C++ errors get unreliable after the first one, so people tend to optimize away the "useless" step of reading the second error...


So, clever bloke declares error checking is wrong or something.

As a non serious programmer it would seem to me that -Werror is either not granular enough or simply not fit for purpose.

I wonder if -Werror is defined in a standard somewhere and hence has a clearly defined meaning.



