GCC 14.1 (gcc.gnu.org)
161 points by P_qRs 6 months ago | hide | past | favorite | 52 comments



It's worth noting that this release breaks a lot of older code, see: https://wiki.gentoo.org/wiki/Modern_C_porting#What_changed.3...

It enables a few -Werror flags by default, refusing to compile code with probable undefined behavior. Unfortunately, a lot of older code subtly raises these warnings (now hard errors), even in cases where it's "fine" (due to valid platform assumptions).


Looking at those flags, several of them are erroring about things that have been deprecated since before I was born. If you're going to complain that it's breaking old code, I have to ask why this code is simultaneously important enough to you to use a modern compiler with modern flags to build it and not important enough to bother changing for 30 years.


Plenty of software broke as a result of these flags; Gentoo has a tracker here: https://bugs.gentoo.org/showdependencytree.cgi?id=870412&hid...

In my own experience so far: we maintain an older version of GCC for matching GBA decompilation projects, and it now fails to compile.


> things that have been deprecated since before I was born.

You must be very young.

Breaking compilation is not a sign of maturity. See Rust for details.


Rust is 9 years old. By programming language standards, it is the new kid on the block.

C is over 50 years old.

Python is 30.

Haskell 35

Java 30


It's great seeing progress made here. C/C++ will never be truly safe languages, but every little bit the compiler can help with is appreciated, especially by those maintaining established codebases.


CPP2/cppfront:

https://github.com/hsutter/cppfront

I hope we see this in C++26 as an optional mode, i.e. #safe and #unsafe, and the same for #impdef or so.


It is only yet another wannabe C++ replacement, regardless of how it is sold as not being one.


A wannabe replacement by…Herb Sutter. I beg to differ.


Which is exactly why he pretends it is not a wannabe replacement, as it is politically incorrect for the chair of ISO C++ to also propose a C++ replacement.

CFront wasn't C, nor was Objective-C, even though both originally started as preprocessors targeting C code generation, and were like TypeScript for C.

They became their own thing after winning enough of a user base, and nowadays all major C compilers and standard libraries are written in C++, not C.


This is true but by now distros have fixed most of the breakage and sent the changes back upstream (certainly this is true for Fedora).


Well, yeah, for maintained upstreams. I sometimes need to build older versions of software, or simply use unmaintained software, for which these changes are a bit of a pain, along with -fno-common before them.


At a certain point, it's probably easier to build and run the unmaintained software in a container running an OS of corresponding vintage. Obviously that has security ramifications, but hopefully this software isn't facing the internet and being in a container at least means you're on a modern kernel and can isolate things at a port and filesystem level.


Debian is super good for this. Most of their apt mirrors are still up and running, so you can easily download and build software from 2002 if you use the appropriate Debian release to do it.

I usually do this first to make sure the software builds and works, then I try to figure out how to get it working on a newer OS.


You're right, but it's also significantly more inconvenient.


It would be nice if they provided a gcc-ancient-compat or something, though it's easy enough to create an alias with the correct flags.


-std=gnu89 works, i.e. explicitly telling it your code is not modern C.


The new flags apply regardless of the -std value, as does -fno-common.


Uhh no they don't, and it's explicitly documented under https://gcc.gnu.org/gcc-14/porting_to.html in "Turning errors back into warnings".

Furthermore, every warning's entry explains why the -std value affects it.


It says a lot that this document has to include things like this:

> Only cast if confident it's correct, otherwise investigate more. Casts will silence real problems if incorrectly used.

> Do not assume it is supposed to be an int.

> Do not simply cast to the "other side" of the error. Casts will silence warnings/errors, but that does not mean the cast is correct.

Besides the increasingly well-known footguns of old C code, we're always going to have problems if there isn't an inherent safety culture among the people maintaining open source projects, whether as upstream or a distributor.


Very easily worked around with -Wno


I might be wrong, but it is my understanding that a lot of these warnings deal with undefined or implementation-defined behavior, and that behavior is changing this release. If that understanding is even close for some of these, then just suppressing the warnings means the software will behave differently.


Nah, you can override the flags and things will keep working, but you have to be aware of it and figure out how to change a project's flags properly.


My understanding is that these were all existing warnings that were off by default (you needed to pass -Wall to see them) and are now displayed by default; I don't see any indication in the patch thread that there were corresponding behavior changes.


>enables a few -Werror flags by default

Presumably they can then be disabled when needed yes? Sounds like a better default.


Good. Long overdue.


Probably should have made this comment 20-25 years ago, but anyway. I was always impressed that with each new release my application ran that bit faster on the same code, so thank you, backend people (SPARC). Lots of fun to bootstrap too: compile the compiler with itself and check it stays the same. Although, to be fair, bootstrapping Kermit from nothing was a more impressive process.


Is Ada being used in industry? Judging from the changelog additions it actually looks quite fun to play with. I didn't realise it's still under active development.


Yes. People are usually surprised that NVIDIA is using it: https://www.adacore.com/nvidia

Ada is assumed to be dead by most US-centric programmers, but is alive and used outside the US. Seems most of the action these days in Ada-land is in Europe.


Not surprisingly, VHDL is also mostly used by European companies.


It's pretty famously used by the US government for safety-critical software.


It was created by the US government, or at least by committee for the US government, specifically the DOD. There was a DOD mandate to use Ada that lasted about 5 minutes before the rebellion against it won out. There's not a lot of new (this century) software written at the DOD's request in Ada.


That's interesting. For some reason I imagined it's still a requirement now.


Also avionics and air traffic control


And train control systems.


This release carries significant improvements in RISC-V support.


With hopefully more to come! There's a RISE project proposal being kicked around to improve vectorization support.


I didn't realize RISC-V already has so many extensions. I just briefly looked over them, and apparently they are often small pieces that get grouped into larger extensions. I hope the extension "mess" won't kill RISC-V in the end; still very excited about RISC-V progress.


For RISC-V, it's extensions all the way down. The mandatory core is tiny and all the value (outside of minuscule microcontrollers) is in the extensions.

It makes more sense if you think about it like programming libraries. That's the same 'mess', but it's not a problem.

There is often fear of 'fragmentation', but vendors tend to be highly responsive to their target market and competitors for each application and will coalesce around a set of extensions that make sense.

The other thing I'd add is that the one thing worse than having 'yet another extension' is not having the extension you need. Extensions are a response to market demand.


Speaking as someone with zero actual domain knowledge, my concern would be about ensuring that disparate extensions have consistent interfaces and can cooperate with each other. I don't know how that would work at a hardware design level, but the thing to avoid would be whatever is the analogue of the rust/async fiasco, where certain components only really work properly with others within their own "sub" ecosystem.


You may like ... https://research.redhat.com/blog/article/risc-v-extensions-w...

I don't think extensions are a "mess", certainly they're much less of a mess than x86 (a low bar, I know ...)


Good article, although not up to date with happenings within RVI.

Most remarkable is the mention of issues around the C (compressed instructions) extension. This got extensive discussion. It turned out only Qualcomm had issues, and the rest of the companies with large-scale implementations are happy with C.

It was proven in these discussions that the issues Qualcomm brought up are specific to their implementation, which largely reuses the Arm core obtained in the Nuvia acquisition, and easy to avoid with a clean-sheet implementation.

The board ultimately ruled against Qualcomm's proposal[0], and the world moved on. Note that every purchasable chip out there supports C, and this is unlikely to change.

0. https://news.ycombinator.com/item?id=38230463


Software mostly targets profiles, such as RVA22 (the latest ratified application profile). A profile specifies a set of extensions that is required.

Note that all software compiled for RVA20 still runs on RVA22 CPUs, but software built for RVA22 does not run directly on the older RVA20 CPUs (trapping and emulating is possible), as they lack the necessary instructions.

This is not unlike e.g.: x86-64v3 vs x86-64v2.

It is only in situations where the implementation is very small and the vendor has full control of both the software and the hardware stack that the vendor might benefit from implementing exactly what they want.

CPUs and software stacks for servers, workstations, laptops, smartphones, tablets and such simply stick to profiles.

Windows and Android are expected to require RVA22+V, possibly RVA23, as the baseline.


Has the LTO bug affecting Ceph client been fixed in this? I don't see a mention of this ticket in the changelog, but in the issue itself it sounds like they managed to resolve it: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113359


Update: searching the GitHub mirror for the commit hash from the issue, I found that https://github.com/gcc-mirror/gcc/commit/1e3312a25a7b34d6e3f... is in fact in the 'releases/gcc-14.1.0' tag.

Even weirder that this one got swept under the changelog rug; it's a pretty major issue.


You submitted this 2 minutes earlier: https://news.ycombinator.com/item?id=40283122


> New function attribute null_terminated_string_arg(PARAM_IDX) for indicating parameters that are expected to be null-terminated strings.

Why is this not a type attribute?


Because most of the time the parameter is going to be a plain char *.


They’re asking why it’s not an attribute on the type instead of on the function. I don’t believe your answer explains that unless I’m overlooking some obvious implication of your statement.


For reasons that are too hairy to go into here, C doesn't "really" have a string type (yeah, yeah, pedants, I know about fixed strings etc.).

It just has arrays of char, which you pinky promise will end in a \0 for things that expect strings.

By declaring that yes, this function really must have that terminating \0 a sufficiently smart compiler can statically analyze some errant use of functions expecting terminated strings. I haven't looked into this new feature, but I assume that's what it's doing.

If you mean why the "__attribute__" syntax doesn't declare such a thing adjacent to the function, the answer is that this allows shoving the extended syntax into standard C in a mostly backwards-compatible way.


No, I’m asking why you can’t apply this attribute to types within a function to verify that, for example:

    __attribute__((null_terminated)) const char* x = strcpy(maybe_not_null_terminated, "some string");
is a warning. Similarly, if it's a type attribute then you'd apply it to the argument itself instead of needing to specify it on the function plus an IDX parameter:

    void do_something(__attribute__((null_terminated)) char* f);
    
The newly introduced strub attribute works this way, so it’s unclear why a function attribute was chosen for this attribute instead of a type attribute.


Oh, I see: you want to annotate the parameter in the type position, so the annotation acts as some sort of qualifier.



