> Good point - however, keep in mind that bug fixes would need to be made in two places.
Only if the bug actually occurs in both implementations. This does happen sometimes, so it's indeed good to keep in mind - but my experience experimenting with these kinds of comparison tests hasn't borne this out as the more common case. YMMV, of course.
> Not only does it mean the maintainer needs to grok two pieces of code, but they may have to determine if the original code is providing bad truths.
I find automated comparison testing of multiple implementations extremely helpful in grokking, documenting, and writing more explicit tests of corner cases - it surfaces the ones I've forgotten - and it frequently makes the whole job easier, even with the doubled function count.
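To make that concrete, here's a minimal sketch of the kind of comparison test I mean: keep a slow, obviously-correct reference alongside the optimized version, then throw seeded random inputs at both and assert they agree. (The duplicate-finding example and all names here are mine for illustration, not from anyone's actual codebase.)

```python
import random

def has_duplicate_reference(xs):
    # O(n^2), but easy to verify by eye - the "ground truth" version.
    return any(xs[i] == xs[j]
               for i in range(len(xs))
               for j in range(i + 1, len(xs)))

def has_duplicate_fast(xs):
    # O(n) via a set - the optimized implementation under test.
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False

def test_implementations_agree(trials=1000):
    rng = random.Random(0)  # seeded so failures are reproducible
    for _ in range(trials):
        n = rng.randrange(0, 20)
        xs = [rng.randrange(0, 10) for _ in range(n)]
        # On disagreement, the failing input is printed, which is
        # exactly the forgotten corner case worth a dedicated test.
        assert has_duplicate_reference(xs) == has_duplicate_fast(xs), xs

test_implementations_agree()
```

When the two diverge, the assertion hands you the offending input - that's the "discovering the ones I've forgotten" part, and it's also where a bug fix may need to land in both implementations.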
> Hopefully there wouldn't be many places where this type of testing would be required as the low hanging fruit is often enough: however keep in mind certain industries do optimize to ridiculous degrees - and in some of those industries optimization can happen first depending on the developers [well-seasoned] intuition, and can be a frequent habit.
I don't trust anyone who relies on intuition in lieu of constantly profiling and measuring in this sea of constantly changing hardware design - I've yet to see those who've honed worthwhile intuitions drop the habit :). Even in the lower hanging fruit baskets, it's frequently less a matter of optimizing the obvious and more a matter of finding where someone did something silly with O(n^scary)... needles in the haystack and where you'd least expect it.