
Point (2) is not correct. Compilers have been inlining functions for decades, so the "all functions have different addresses" argument does not hold. A compiler can even _merge_ two different but similar-looking functions into a single one. And that's just part of the code transformations compilers can do.

And that's without even mentioning LTO, which was explicitly designed to optimize code layout and size during the linking phase.

So, code deduplication is a real thing and happens regularly to your code. Templates are not much different from non-templated code written for the same purpose.
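
For example (a sketch; the names are made up, and "identical" assumes a typical ABI where all data pointers share one representation):

  #include <cstddef>

  // One function template, instantiated at two pointer types.
  template <typename T>
  std::size_t count_nonnull(T* const* v, std::size_t n) {
      std::size_t c = 0;
      for (std::size_t i = 0; i < n; ++i)
          if (v[i] != nullptr) ++c;
      return c;
  }

  // On common ABIs both instantiations lower to identical machine
  // code, so the toolchain can keep a single copy, exactly as it
  // could for two hand-written non-template versions.
  template std::size_t count_nonnull<int>(int* const*, std::size_t);
  template std::size_t count_nonnull<double>(double* const*, std::size_t);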




If you take the addresses of different functions, the standard requires them to compare unequal. However, if you merely call functions and never take their addresses, the standard doesn't put any requirements on that, so inlining etc. is legal.
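
A minimal illustration of that split (f and g are placeholders):

  #include <cassert>

  int f(int x) { return x + 1; }
  int g(int x) { return x + 1; }  // same body as f

  int main() {
      // Distinct functions must compare unequal, so a conforming
      // toolchain can't fold f and g into one symbol here.
      assert(&f != &g);
      // Merely calling them observes no addresses, so both calls
      // may be inlined or dispatched to shared code.
      return f(1) + g(2);
  }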

But without LTO, it's impossible for the compiler to know whether a non-inline function with external linkage will only ever be called or will also have its address taken in some other translation unit, so it must conservatively give the function a unique address.


There are linker-level optimizations outside of LTO that can merge functions. gold's --icf option comes to mind, with --icf=safe meant to be conforming.
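
A sketch of how that's typically invoked (gold's flag spellings; -ffunction-sections gives the linker per-function granularity to fold):

  // icf_demo.cpp: two identical bodies a folding linker can collapse.
  int add_apples(int a, int b) { return a + b; }
  int add_oranges(int a, int b) { return a + b; }

  int main() { return add_apples(1, 2) + add_oranges(3, 4); }

  // Build with gold and identical code folding:
  //   g++ -O2 -ffunction-sections -fuse-ld=gold -Wl,--icf=all  icf_demo.cpp
  //   g++ -O2 -ffunction-sections -fuse-ld=gold -Wl,--icf=safe icf_demo.cpp
  // --icf=all folds aggressively and may give distinct functions the
  // same address; --icf=safe only folds where the address cannot be
  // observed, which is why it's the conforming variant.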


All that sounds ... reasonable? What I didn't agree with is the "requiring all functions to have different addresses" argument. It's invalidated both by the compiler (e.g. inlining) and by the linker (e.g. code folding).


Good example of an Observer Effect in programming! If you look at fn addresses they do one thing, and if you don’t they do another. :P


Only if it can prove that nothing can observe the address of the function(s). Inlining renders the whole discussion moot.

The point stands. Compilers cannot merge equivalent functions in precisely the cases where it makes sense to even think about this optimization, i.e. when the compiler actually has to export an externally visible symbol for the function.


And that's what happens in probably 99% of cases. I objdump the code quite often to understand what happens under the hood, and I rarely see similar code being duplicated. There are of course counterexamples, but I just didn't agree with the "standard ... actively sabotage deduplication" sentiment, which to me reads as something universal rather than an exception.
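
For reference, the kind of check I mean (file and function names are made up):

  // dup_check.cpp: two hand-written functions with identical bodies.
  long sum3(long a, long b, long c) { return a + b + c; }
  long total(long x, long y, long z) { return x + y + z; }

  // Compile and inspect what was actually emitted:
  //   g++ -O2 -c dup_check.cpp
  //   objdump -d --demangle dup_check.o   # compare the two bodies
  //   nm --demangle dup_check.o           # see which symbols survive
  // The emitted bodies are identical and trivially foldable later in
  // the pipeline; real duplication surviving into the final binary is
  // the exception, not the rule.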


I think active sabotage is a correct assessment when a simple, obvious optimization is explicitly prohibited for no (defensible) reason and can only be applied via extensive whole-program analysis that lets the nonsensical rule be bypassed completely. It's still sabotage; it's just mitigated by the extremely smart compilers we have nowadays, which basically pick the program apart and rewrite a functionally equivalent one.



