That's not exactly what marktt was asking. For example, calling sleep (explicitly, or implicitly while waiting on network input) would incur almost no CPU cost under marktt's accounting method, but it does incur a wall-clock charge under Lambda.
(It's totally understandable and fair that it does; it's just different from what he asked.)
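To make that concrete, here's a hypothetical handler (the name, the 2-second figure, and the memory size below are made up for illustration) where nearly all of the invocation is spent blocked:

    import time

    # Hypothetical handler: nearly all of its duration is blocked
    # (simulated here with sleep, standing in for a slow network call).
    def handler(event, context):
        time.sleep(2.0)  # ~0 CPU time, but 2.0s of billable wall clock
        return {"status": "done"}

    handler({}, None)

Under CPU-time accounting this call costs nearly nothing; under Lambda's GB-second billing it pays for the full two seconds at whatever memory size is configured.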
One could think of Lambda as charging (A * network bandwidth + B * RAM usage * wall-clock time + C * CPU time + D * disk I/O), where C and D are both zero.
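As a sketch, that model might look like the following. The coefficients are illustrative assumptions, not published pricing (though B is roughly Lambda's well-known per-GB-second rate):

    # Sketch of the hypothesized charge. A, B, C, D are illustrative;
    # C and D are zero because CPU time and disk I/O aren't billed directly.
    def cost(net_gb, ram_gb, wall_s, cpu_s, disk_io,
             A=0.09, B=0.0000166667, C=0.0, D=0.0):
        return (A * net_gb
                + B * ram_gb * wall_s  # the term Lambda actually bills on
                + C * cpu_s            # zero: CPU time not billed directly
                + D * disk_io)         # zero: disk I/O not billed directly

    # A mostly-sleeping 2s call at 512 MB pays the full RAM * time term:
    print(cost(net_gb=0, ram_gb=0.5, wall_s=2.0, cpu_s=0.01, disk_io=0))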
From a capacity planning perspective, your goal should be 100% CPU utilization per box. Allocating excessive, unutilized wall-clock time is wasted capacity, and the same applies to RAM/heap per call. These are the same considerations that played out in client/server vs. mainframes for so many years.
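A rough back-of-the-envelope example of that waste (the box size and per-call figures here are assumptions): if each call holds half a gigabyte for two seconds but does only 10 ms of actual work, the box fills up on RAM while its CPUs sit nearly idle.

    # Illustrative only: a 16 GB box serving calls that each hold 512 MB
    # for 2s of wall clock but use only 10 ms of CPU.
    ram_gb, call_ram_gb = 16, 0.5
    wall_s, cpu_s = 2.0, 0.01

    concurrent = ram_gb / call_ram_gb          # 32 calls fit in RAM
    cpu_util = concurrent * (cpu_s / wall_s)   # ~0.16 cores' worth of work
    print(f"{concurrent:.0f} concurrent calls, ~{cpu_util:.2f} cores busy")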
Mainframe budgeting of cycles, memory, and I/O was highly effective at efficiently utilizing resources. It's a model of computation that has largely disappeared but is still relevant. When Google App Engine first came out, I had hoped it would use this model, but it went the containerization route instead.