The rule is that memcpy(x, y, n) is undefined if x or y is null. Making it defined when n is zero is a special case carved out of that rule.



Sure, except that rule isn't useful on its own. You can't _rely_ on undefined behavior, so preserving the rule that "memcpy(x, y, n) is undefined if x or y is null" doesn't help the programmer. OTOH, preserving the rule that memcpy(x, y, n) is defined for all x, y when n == 0 really is useful.
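For example, a caller copying a possibly-empty slice whose data pointer may be null has to write a guard today. A minimal sketch (the function and parameter names are illustrative):

    #include <string.h>

    /* `src` may be NULL when `len == 0`, e.g. the data pointer of an empty
       buffer. Under the current rule memcpy(dst, NULL, 0) is UB, so the guard
       is required; if n == 0 were defined even for null pointers, the call
       could simply be made unconditionally. */
    void append_bytes(char *dst, const char *src, size_t len) {
        if (len != 0)
            memcpy(dst, src, len);
    }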


It is useful. I'm focusing on the effect on code generation: the compiler relies on that rule to deduce that x and y are not null, and it optimises on that basis (sketched below). The special case for n == 0 makes it harder for the compiler to deduce the non-null-ness, because the deduction would then only hold when it can also prove n != 0, and thus harder for it to optimise based on that. Restricting when that optimisation can be applied is the effect of your special case.

You might not agree with this as a design decision, and I'm not arguing that it's the right trade-off; I'm just answering:

> I don't think your optimization actually exists. In what possible world can we generate better code by assuming that for memcpy(x,y,n), when n==0, x!=nullptr||y!=nullptr?
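A minimal sketch of the deduction in question (illustrative names; it assumes a compiler that exploits memcpy's non-null contract, as GCC and Clang do when memcpy is declared with the nonnull attribute):

    #include <stdio.h>
    #include <string.h>

    void copy_and_check(char *dst, const char *src, size_t n) {
        memcpy(dst, src, n);      /* today: UB if dst or src is NULL, for any n   */
        if (dst == NULL)          /* so the compiler may treat this branch as dead */
            puts("dst was null"); /* and delete it, even though n might be 0      */
    }

If memcpy(NULL, NULL, 0) were defined, deleting that branch would only be valid where the compiler can also prove n != 0.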



