Unit tests rarely cover high-load concurrency scenarios.
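To make that concrete, here's a minimal Rust sketch (the counter and the thread counts are invented for illustration): a read-modify-write bug that a typical single-threaded unit test happily passes, and that only shows up once many threads hit it at the same time.

```rust
// Hypothetical counter: the single-threaded unit test passes,
// but updates are lost under concurrent load.
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

struct HitCounter {
    hits: AtomicU64,
}

impl HitCounter {
    fn new() -> Self {
        Self { hits: AtomicU64::new(0) }
    }

    // Buggy read-modify-write: two threads can both read the same value
    // and both store value + 1, losing one increment.
    fn record_hit(&self) {
        let current = self.hits.load(Ordering::Relaxed);
        self.hits.store(current + 1, Ordering::Relaxed);
    }

    fn total(&self) -> u64 {
        self.hits.load(Ordering::Relaxed)
    }
}

#[test]
fn records_a_hit() {
    // Typical unit test: single-threaded, so the race never shows up.
    let counter = HitCounter::new();
    counter.record_hit();
    assert_eq!(counter.total(), 1);
}

fn main() {
    // Under load the lost updates appear: the final total is usually < 8000.
    let counter = Arc::new(HitCounter::new());
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    c.record_hit();
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("expected 8000, got {}", counter.total());
}
```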
I used to think unit test coverage was important, but I've found using the type system to make errors impossible is a much more effective way of lowering the defect rate.
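Roughly what I mean, as a minimal Rust sketch - the payment domain and all of the names are invented for illustration. Instead of writing tests to check that invalid flag combinations never occur, you model the states so they can't even be constructed:

```rust
// Flag-and-optional-field version: the type system allows nonsense like
// refunded == true while paid_at == None, so a test has to catch it.
#[allow(dead_code)]
struct PaymentLoose {
    paid_at: Option<u64>,
    refunded: bool,
    refund_reason: Option<String>,
}

// Enum version: each state carries only the data that can exist in that
// state, so the invalid combinations simply cannot be represented.
enum Payment {
    Pending,
    Paid { paid_at: u64 },
    Refunded { paid_at: u64, reason: String },
}

fn describe(p: &Payment) -> String {
    // The compiler forces every state to be handled; forgetting one is a
    // compile error rather than a defect a unit test may or may not catch.
    match p {
        Payment::Pending => "awaiting payment".to_string(),
        Payment::Paid { paid_at } => format!("paid at {paid_at}"),
        Payment::Refunded { paid_at, reason } => {
            format!("paid at {paid_at}, refunded: {reason}")
        }
    }
}

fn main() {
    let p = Payment::Paid { paid_at: 1_700_000_000 };
    println!("{}", describe(&p));
}
```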
Given the project management triangle, they kind of are - the goal should be to achieve the required defect rate at minimum maintenance cost. A lot of people talk as though more tests or more verification is always a good thing, but zero defects is usually not a business priority if it means slowing down feature work. So adding extra kinds of verification should be a last resort, reserved for when you actually need a lower defect rate than a single method can achieve - which is pretty rare IME.
Much as I enjoy writing high quality software, it's irresponsible to spend time or money producing something higher quality than is justified by the needs of the business. Generally I find that unit tests for anything other than actual logic (which is a tiny fraction of most codebases) don't offer enough benefit to be worth the costs.
(I'd also argue that unit testing can reduce the quality of the software as it makes the internals of the architecture more costly to change).
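As a rough sketch of that coupling (the cache and its names are hypothetical): a white-box test that reaches into the internals turns a pure refactor into a broken test suite, while a test against the public behaviour doesn't.

```rust
use std::collections::HashMap;

pub struct Cache {
    // Internal detail: today it's a HashMap, tomorrow it might be an LRU list.
    entries: HashMap<String, String>,
}

impl Cache {
    pub fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    pub fn put(&mut self, key: &str, value: &str) {
        self.entries.insert(key.to_string(), value.to_string());
    }

    pub fn get(&self, key: &str) -> Option<&String> {
        self.entries.get(key)
    }
}

#[test]
fn white_box_test_that_blocks_refactoring() {
    // Asserts on the internal HashMap directly: swapping the storage for an
    // LRU structure breaks this test even if get/put still behave the same.
    let mut cache = Cache::new();
    cache.put("k", "v");
    assert!(cache.entries.contains_key("k"));
}

#[test]
fn behavioural_test_that_survives_refactoring() {
    // Only exercises the public API, so internal changes stay cheap.
    let mut cache = Cache::new();
    cache.put("k", "v");
    assert_eq!(cache.get("k").map(String::as_str), Some("v"));
}
```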
> Much as I enjoy writing high quality software, it's irresponsible to spend time or money producing something higher quality than is justified by the needs of the business.
I think it's more irresponsible to produce something the customer won't be able to use for years, or something that needs a patch before it's even released.
The responsible approach is the one that delivers the best cost/benefit for the customer. Sure, that doesn't mean producing something that's completely unusable - but it rarely means producing something completely bug-free either.