I understand this phenomenon as the new grad and the senior developer optimizing for different things. The new grad focuses solely on the asymptotic complexity of the code. No matter how slow or complicated it is in practice, they will reach for the asymptotically fastest data structure.
The senior developer optimizes for a different set of criteria:
1) How hard is it to understand the code and verify that it's correct?
2) How fast is the algorithm in practice?
There are several reasons why the performance of an algorithm in practice differs from its performance in theory. The most obvious is that big-O notation does not capture lots of details that matter in practice: an L1 cache read and a disk IOP are both treated the same in theory.
A second reason is that the implementation of a complex algorithm is more likely to be incorrect. In some cases this leads to bugs that you can find with good testing. In other cases, it leads to a performance degradation that you'll only find if you run a profiler.
I once saw a case where a function for finding the right shard for a given id was too slow. The code needed to determine which of a list of id ranges a given id fell into. The implementation sorted the id ranges once ahead of time and then ran a binary search over the ranges to find the right shard for each id. One engineer took a look at this, realized we were performing the shard lookups sequentially, and parallelized them. This made the code faster, but we still would have needed to double the size of our servers to provide enough additional CPU to make it fast enough.
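As a hypothetical reconstruction of the intended design (the range representation and names here are my own assumptions, not the original code), the idea was to pay the sorting cost once and then answer each lookup with a binary search:

```scala
object ShardLookup {
  // Assumed shape: each shard owns a contiguous, non-overlapping
  // id range [start, end], inclusive on both ends.
  final case class ShardRange(start: Long, end: Long, shard: String)

  // Sort the ranges once, ahead of time.
  def prepare(ranges: Seq[ShardRange]): Vector[ShardRange] =
    ranges.sortBy(_.start).toVector

  // Binary search over the sorted ranges to find the shard for an id.
  def shardFor(sorted: Vector[ShardRange], id: Long): Option[String] = {
    var lo = 0
    var hi = sorted.length - 1
    while (lo <= hi) {
      val mid = (lo + hi) / 2
      val r = sorted(mid)
      if (id < r.start) hi = mid - 1       // id is below this range
      else if (id > r.end) lo = mid + 1    // id is above this range
      else return Some(r.shard)            // id falls inside this range
    }
    None // id is not covered by any range
  }
}
```

Done this way, each lookup is O(log n) over ranges that were sorted exactly once, which is what made the observed behavior (sorting on every call) so surprising.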
Another engineer hooked the code up to a profiler and made a surprising discovery. It turns out the implementation of the function was subtly incorrect and was sorting the id ranges on every call. This happened because the code sorted the id ranges inside of Scala's mapValues function. It turns out that mapValues does not actually map a function over the values of a hash table. Instead, it returns a view that, on every key lookup, fetches the value from the original hash table and then applies the function[0]. This results in the function being called on every read.
The solution was to replace mapValues with map. This dramatically improved the performance of the system and brought its CPU usage down to nearly zero. Notably, it would have been impossible to discover this issue without either knowing the difference between map and mapValues or using a profiler.
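The difference is easy to demonstrate with a counter. This is a minimal sketch, not the original shard code, assuming the Scala 2.12-era mapValues semantics (the same lazy behavior survives in 2.13, where mapValues is deprecated in favor of .view.mapValues):

```scala
object MapValuesDemo {
  // mapValues returns a lazy view over the original map:
  // the "expensive" sort re-runs on every key lookup.
  def lazySortCount(): Int = {
    var sorts = 0
    val ranges = Map("shard-1" -> List(30, 10, 20))
    val view = ranges.mapValues { xs => sorts += 1; xs.sorted }
    view("shard-1") // first read: sorts
    view("shard-1") // second read: sorts again
    sorts
  }

  // map builds a new, strict Map: the sort runs exactly once per key,
  // at construction time, and lookups just read the stored result.
  def strictSortCount(): Int = {
    var sorts = 0
    val ranges = Map("shard-1" -> List(30, 10, 20))
    val strict = ranges.map { case (k, xs) => k -> { sorts += 1; xs.sorted } }
    strict("shard-1")
    strict("shard-1")
    sorts
  }

  def main(args: Array[String]): Unit = {
    println(lazySortCount())   // 2: sorted on every read
    println(strictSortCount()) // 1: sorted once up front
  }
}
```

With map, the sort happens once when the map is built; with mapValues, it happens on every single lookup, which is exactly the behavior the profiler exposed.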
[0] https://blog.bruchez.name/2013/02/mapmap-vs-mapmapvalues.htm...