It's somewhat true for imperative - you still have to keep an eye on your call graph, et cetera.
But in Haskell, say, because of lazy evaluation, it's easier to lose track of how much CPU effort something is going to cost, and when you're going to have to pay for it.
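Haskell's laziness is pervasive, so this is only an analogy, but Java's streams give a small taste of the same deferral: building the pipeline is nearly free, and the CPU cost is paid only when a terminal operation demands a result. A minimal sketch:

```java
import java.util.stream.IntStream;

public class DeferredCost {
    public static void main(String[] args) {
        // No squaring happens here -- this just wires up the pipeline.
        IntStream pipeline = IntStream.rangeClosed(1, 10).map(x -> x * x);
        // The terminal operation is where the cost is actually paid.
        int total = pipeline.sum();
        System.out.println(total); // prints 385
    }
}
```

In Haskell the gap between "defined" and "demanded" can be much wider than one line apart, which is exactly what makes the cost hard to track.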
It is more than that. In many of the popular imperative languages, memory allocations are written out explicitly by the programmer - and not just memory, but often the processing work, too. When things get hidden behind an abstraction like "map", suddenly what you thought was a single function call is actually a crapload more.
This is becoming less true, of course. Java gaining lambdas hid a TON of places where you just managed to allocate a pile of memory and/or perform a pile of operations without it showing at the call site.
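To make that concrete, here's a minimal sketch (the names are illustrative): what reads like one call allocates a lambda object, a stream pipeline, boxed Integers, and a fresh result list, none of which is visible in the source.

```java
import java.util.List;
import java.util.stream.Collectors;

public class HiddenCost {
    static List<Integer> doubled(List<Integer> xs) {
        // One line of source; under the hood: a Stream object, a mapping
        // stage, autoboxing of every element, and a newly allocated list.
        return xs.stream().map(x -> x * 2).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(doubled(List.of(1, 2, 3))); // prints [2, 4, 6]
    }
}
```

None of this is wrong, exactly - it's just a lot more machinery than the old hand-rolled for loop made you look at.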
And note, I do think this downside can be oversold. So, don't take this as a condemnation of "functional" languages and methods. It definitely exists, though.
Granted, I am almost certainly simply infatuated with what is essentially an "anti immutable" algorithm. http://taeric.github.io/Sudoku.html