
> If you wanted the fastest possible performance, you'd try each algorithm, profile them, then select the one that works best. This build process can do that automatically. GNU autoconf cannot.

Except doing it at build time is a terrible idea anyway, because the set of CPUs the code will actually run on is unknown at build time, unless that compiled object is literally never meant to be used by anyone else. But that's not how people actually develop: they distribute compiled objects, and users link against them on their own CPUs, which the original build system cannot possibly know about.
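
To make that concrete: on ELF systems, GCC/Clang let a library defer the choice to load time with an ifunc resolver, so the implementation is picked on the user's machine when the dynamic linker binds the symbol, not on the build machine. A minimal sketch, with made-up function names (checksum, checksum_generic, checksum_fast):

    #include <stddef.h>

    typedef int (*checksum_fn)(const unsigned char *, size_t);

    /* Portable fallback. */
    static int checksum_generic(const unsigned char *p, size_t n) {
        int s = 0;
        for (size_t i = 0; i < n; i++) s += p[i];
        return s;
    }

    /* Stand-in for a vectorized variant. */
    static int checksum_fast(const unsigned char *p, size_t n) {
        return checksum_generic(p, n);
    }

    /* The resolver runs in the user's process, so it sees the user's CPU. */
    static checksum_fn resolve_checksum(void) {
        __builtin_cpu_init();
        if (__builtin_cpu_supports("avx2"))
            return checksum_fast;
        return checksum_generic;
    }

    /* Exported symbol: bound to one of the implementations at load time. */
    int checksum(const unsigned char *p, size_t n)
        __attribute__((ifunc("resolve_checksum")));

glibc does the same thing for memcpy and friends, which is why the variant you actually get has nothing to do with the machine the library was built on.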

Requiring AVX2 or whatever by default really has nothing to do with this. High-speed software that's actually usable for developers and users selects appropriate algorithms at runtime based on the characteristics of the actual machine it runs on (for example, Ryzen vs Skylake, which have different throughput and cycle characteristics). This is the only meaningful way to do it, unless you literally only care about ever deploying to one machine, or you just don't give a shit about usability.
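
For code you ship yourself, the same idea works as a plain function pointer chosen once at startup, no toolchain magic needed. A sketch (GCC/Clang specific: __builtin_cpu_supports plus a target attribute so the AVX2 path can be compiled without enabling AVX2 globally), with illustrative sum functions that aren't from the article:

    #include <immintrin.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Portable fallback: sum 32-bit values into a 64-bit accumulator. */
    static uint64_t sum_scalar(const uint32_t *v, size_t n) {
        uint64_t s = 0;
        for (size_t i = 0; i < n; i++) s += v[i];
        return s;
    }

    /* AVX2 path; the target attribute enables AVX2 for this function only. */
    __attribute__((target("avx2")))
    static uint64_t sum_avx2(const uint32_t *v, size_t n) {
        __m256i acc = _mm256_setzero_si256();
        size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256i x = _mm256_loadu_si256((const __m256i *)(v + i));
            /* Zero-extend each 128-bit half to 64-bit lanes and accumulate. */
            acc = _mm256_add_epi64(acc,
                  _mm256_cvtepu32_epi64(_mm256_castsi256_si128(x)));
            acc = _mm256_add_epi64(acc,
                  _mm256_cvtepu32_epi64(_mm256_extracti128_si256(x, 1)));
        }
        uint64_t lanes[4];
        _mm256_storeu_si256((__m256i *)lanes, acc);
        uint64_t s = lanes[0] + lanes[1] + lanes[2] + lanes[3];
        for (; i < n; i++) s += v[i];   /* leftover tail */
        return s;
    }

    typedef uint64_t (*sum_fn)(const uint32_t *, size_t);

    /* Decide once, on the machine the binary is actually running on. */
    static sum_fn pick_sum(void) {
        __builtin_cpu_init();
        if (__builtin_cpu_supports("avx2"))
            return sum_avx2;
        return sum_scalar;
    }

    int main(void) {
        uint32_t data[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        printf("%llu\n", (unsigned long long)pick_sum()(data, 10));
        return 0;
    }

And for the Ryzen vs Skylake case, where every candidate is supported but they perform differently, the dispatch function can go further and benchmark the candidates once on first use, then cache the winner; that's exactly the kind of decision a build-time probe on some other machine can't make for you.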



