I dunno about the GP, but it's a JPL guideline to never use dynamic allocation after initialization, so it's not unthinkable. I'd also suspect that many microcontroller programs have to be really careful with the heap simply because they don't have much memory to allocate in the first place.
https://www.perforce.com/blog/kw/NASA-rules-for-developing-s...
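To make that concrete, here's a rough sketch of what "dynamic allocation only during initialization" can look like (my own toy illustration, not taken from the JPL rules; names are made up):

    #include <cstddef>
    #include <cstdlib>

    static int* g_samples = nullptr;
    static std::size_t g_sample_count = 0;

    // Called once at startup: the only place dynamic allocation happens.
    bool init_samples(std::size_t n) {
        g_samples = static_cast<int*>(std::malloc(n * sizeof(int)));
        if (!g_samples) return false;  // fail fast at boot, not mid-flight
        g_sample_count = n;
        return true;
    }

    // Everything after init just reuses this one block; nothing else
    // ever touches the heap, so fragmentation and OOM can't bite later.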
It's pretty easy in a lot of embedded applications to basically only have objects that live forever or are allocated on the stack. I usually aim for zero heap use at all, with statically allocated objects for the 'forever' set (which makes it easy to see exactly what's using memory). If you're careful you can also work out worst-case stack usage statically and get a decent guarantee that you'll never run out of memory. If there are short-lived objects, a memory pool or queue is usually the best option (though at that point you do invite use-after-free-type errors and pool exhaustion); a sketch below. I'd say with this style it's extremely rare to have memory safety issues, but it's also not suitable for a lot of applications.
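The kind of pool I mean is nothing fancy. A toy sketch (type and names are mine, just to illustrate the trade-offs; assumes T is default-constructible):

    #include <array>
    #include <cstddef>

    template <typename T, std::size_t N>
    class Pool {
        std::array<T, N> slots_{};   // storage lives forever
        std::array<bool, N> used_{}; // only these flags change at runtime
    public:
        T* acquire() {
            for (std::size_t i = 0; i < N; ++i)
                if (!used_[i]) { used_[i] = true; return &slots_[i]; }
            return nullptr;  // pool exhaustion: caller has to handle it
        }
        void release(T* p) {
            // Nothing stops a caller keeping p after this, which is exactly
            // the use-after-free style risk I mentioned above.
            used_[static_cast<std::size_t>(p - slots_.data())] = false;
        }
    };

    static Pool<int, 16> g_ints;  // statically allocated: shows up in the
                                  // map file, so memory use is easy to audit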
C++ uses "value type" to mean either a scalar object (int, tuple<double>, etc.) or a container that manages heap memory for you, e.g. a vector of a value type. If you stay in that world you can basically ignore memory management.
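E.g. something like this, where nothing in sight ever calls new or delete (toy example, my own names):

    #include <tuple>
    #include <vector>

    std::vector<std::tuple<double, double>> make_points() {
        std::vector<std::tuple<double, double>> pts;  // owns its heap memory
        pts.emplace_back(1.0, 2.0);
        pts.emplace_back(3.0, 4.0);
        return pts;  // moved out; freed automatically when the caller is done
    }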
Staying away from std::unique_ptr<T> and std::unique_ptr<T[]> while still using std::vector<T> sounds kind of silly: the last one is a generalized version of the first two, so claiming you don't use the first two is really misleading.
I'm not sure how you define "value type" (it certainly isn't C++ terminology; are you coming from C#?), but in any case, this is a distinction without a difference. You can replace every use of std::unique_ptr with std::vector and just switch a few method calls (like using .data() instead of .get()) and you'd achieve the same effects, just slower. I'm not sure what the point would be though, other than being able to claim that you don't use smart pointers.
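To spell out the swap I mean (assuming a trivially constructible element type; the "slower" part is that vector value-initializes its elements, where unique_ptr<T[]> leaves them uninitialized):

    #include <cstddef>
    #include <memory>
    #include <vector>

    void with_unique_ptr(std::size_t n) {
        std::unique_ptr<int[]> buf(new int[n]);  // uninitialized storage
        int* raw = buf.get();   // hand a raw pointer to a C-style API
        (void)raw;
    }

    void with_vector(std::size_t n) {
        std::vector<int> buf(n); // same ownership; elements zeroed (the cost)
        int* raw = buf.data();   // .data() instead of .get()
        (void)raw;
    }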