'Zero cost' generally means 'zero runtime cost'. For most workloads, an increase in compile times is an acceptable price for an optimized production build. In the rare cases where that isn't enough, I find that using precompiled headers and splitting code into smaller translation units helps significantly.
Yet many people never look at the other costs, so in effect they assume 'insignificant cost for everything', which isn't true.
For example, C++'s template language as originally implemented was technically 'zero cost', but C++ programmers paid dearly for it for a long time, in inscrutable error messages and slow compile times (this has been fixed to a large degree by modern implementations and standards).
People didn't understand that they were paying in programmer time, compile time, and bad error messages? I don't believe that for a second. Those costs are extremely visible. As visible as it is possible to be. They wouldn't get more visible if you dressed them in reflective jackets, slapped a police light bar on top, and turned on a siren to make sure everyone was looking in the right direction. Who undergoes that kind of suffering and doesn't even notice? Nobody.
In contrast, the benefits of Zero Cost Abstraction are quite subtle. "Why would I want that? Ever heard of Moore's law? Caring about perf is so 1990s!" goes the immediate thinking. If you never have to write high-perf code, that reasoning is even correct! Of course, there are still many places where performance does matter, and being able to use high-level language features in the very innermost loops, the places that halve or quarter the throughput of your $6000 graphics card(s) if you carelessly toss in even a single call's worth of overhead, is quite something to those in a position to take advantage of it.
Since the costs of ZCA are obvious and its benefits are subtle, I think it's perfectly fair to use the term as a way to draw attention to the latter.