But what's wrong is almost always what's odd, so the baseline expectation is that you understand the oddities on your system. Otherwise you're blind to problems.
Anyway, it's also not common for an application to have "reduce total CPU usage" as a goal. Available CPU on the database is usually far more valuable than CPU on the application server, so it makes sense to trade cheap application-server cycles for scarce database cycles.
In most SQL cases your limiting factor will be the network, so you usually want to minimize what comes over the wire. The tradeoff is time: if it takes your client longer to reassemble the data than it would to just send the joined result, you let the server do the join. For example, say I have a 50-row, 10-column lookup table joined to a 300k-row table that, for some reason, must be returned to the client. Under those conditions it might be faster to glue the join back together on the client, because the server-side join repeats the lookup columns across all 300k rows. Whether that's worth it depends on how much is in those 50 rows: just a bunch of ints, probably not; a few 400-byte strings, and doing it on the client could be interesting and faster. Like he said, "it depends".
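To make the "glue it back together on the client" idea concrete, here's a minimal sketch in Python (sqlite3 is just a stand-in driver, and the table/column names — orders, status_lookup, status_id, label — are made up): instead of a server-side JOIN that ships the wide lookup columns with every one of the 300k rows, the client fetches the 50-row lookup once and resolves the foreign key itself.

    import sqlite3

    conn = sqlite3.connect("app.db")

    # 1. Pull the small lookup table once (~50 rows) and index it in memory.
    lookup = {
        status_id: label
        for status_id, label in conn.execute(
            "SELECT status_id, label FROM status_lookup"
        )
    }

    # 2. Stream the big table *without* the join, so each of the 300k rows
    #    carries only the small integer key instead of the wide lookup columns.
    for order_id, status_id in conn.execute(
        "SELECT order_id, status_id FROM orders"
    ):
        # "Glue it back together" on the client; .get() tolerates missing keys.
        label = lookup.get(status_id)
        ...  # render/process the row

    conn.close()

If the lookup columns were a few 400-byte strings, the joined result would be roughly 300k x 400+ bytes heavier on the wire than this two-query version, which is exactly the case where client-side reassembly starts to win.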