Screen-space reflections are faster and require no new scene draws, but they can't show you "new angles" on reflected objects and they won't see around corners. Planar reflections require an extra scene draw, but that's actually pretty modest: non-planar reflections require six scene draws for a cubemap, or full-on ray tracing.
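Concretely, the extra scene draw for a planar reflection just re-renders from a camera mirrored across the plane. A minimal sketch of that reflection math, with all names and vectors illustrative rather than from any particular engine:

```python
# Sketch of how a planar reflection's "second scene draw" gets its camera:
# reflect the eye position (and view direction) across the mirror plane.
# The plane is n·x = d with unit normal n.

def reflect_point(p, n, d):
    """Reflect point p across the plane n·x = d (n must be unit length)."""
    dist = sum(pi * ni for pi, ni in zip(p, n)) - d  # signed distance to plane
    return tuple(pi - 2.0 * dist * ni for pi, ni in zip(p, n))

def reflect_direction(v, n):
    """Reflect a direction vector (the plane offset d doesn't matter here)."""
    dot = sum(vi * ni for vi, ni in zip(v, n))
    return tuple(vi - 2.0 * dot * ni for vi, ni in zip(v, n))

# Mirror lying in the plane y = 0 (floor/water), camera above it:
eye = (0.0, 3.0, -5.0)
mirrored_eye = reflect_point(eye, (0.0, 1.0, 0.0), 0.0)  # camera below the floor
```

Render the scene once from `mirrored_eye` into a texture, then sample that texture when drawing the mirror surface itself.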
In general, lots of things in games require re-rendering the scene multiple times per frame, so game engines are really, really good at this. Shadows, projections, transparency / translucency, motion blur and depth of field, indirect lighting, deferred lighting, even things like hit testing (though for dependency + latency reasons that's a prime candidate for offloading to the CPU). Anything that requires less visual fidelity will have its performance tuned accordingly: fewer pixels, simpler shaders, lower-LOD geometry, lower framerates. Nobody will notice if the fuzzy bounce lights are calculated once every five frames and blended.
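The "once every five frames and blended" trick can be sketched like this; the interval, blend factor, and function names are all illustrative:

```python
# Sketch of amortizing an expensive lighting term: re-evaluate it only every
# UPDATE_INTERVAL frames, and ease the displayed value toward the latest
# result with a cheap exponential blend so nobody sees the step.

UPDATE_INTERVAL = 5   # recompute the bounce light once per 5 frames
BLEND = 0.2           # per-frame blend factor toward the latest result

def simulate(frames, expensive_bounce_light):
    """Return the per-frame displayed value of the blended lighting term."""
    cached = displayed = expensive_bounce_light(0)
    history = []
    for frame in range(frames):
        if frame % UPDATE_INTERVAL == 0:
            cached = expensive_bounce_light(frame)  # the slow part, run rarely
        displayed += BLEND * (cached - displayed)   # cheap per-frame blend
        history.append(displayed)
    return history
```

With a sudden change in the scene, the displayed value lags a few frames behind and catches up smoothly, which for low-frequency bounce lighting is exactly the point.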
Ray tracing is conceptually simpler but (so far) slower at any given level of cleverness -- you'll often hear that it has asymptotic benefits, but those same benefits can be pulled out of rasterizers with effort comparable to what's required to make the ray tracing practical.
Sure you can, people do it all the time, but I'll grant you that it kinda sucks (which is how I notice that people do it all the time). There are situations where everything you want to reflect is already on screen and the angular difference between the camera and reflected camera isn't enough to be noticeable. When those conditions are met, SSR is awfully tempting. Water is the usual suspect, but wall-mounted mirrors can work too if the camera will only see it from glancing angles and it's reflecting a wide-open space.
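Under the hood, SSR is a ray march against the depth buffer, which is exactly why it fails when the reflected geometry was never drawn on screen. A toy 1D version of that march, with everything (the buffer, step counts, ray setup) illustrative:

```python
# Minimal 1D sketch of the screen-space ray march behind SSR: step along the
# reflected ray in screen space and stop where the ray passes behind the
# depth recorded in the buffer. Real implementations march in 2D and add
# thickness/fade terms, but the failure mode is the same: if the ray leaves
# the screen, there's nothing to reflect.

def ssr_march(depth_buffer, start_x, start_depth, dx, dz, max_steps=64):
    """Return the column index the ray hits, or None if it leaves the screen."""
    x, z = float(start_x), start_depth
    for _ in range(max_steps):
        x += dx
        z += dz
        col = int(x)
        if col < 0 or col >= len(depth_buffer):
            return None             # ray left the screen: SSR has no answer
        if z >= depth_buffer[col]:  # ray went behind recorded geometry: a hit
            return col
    return None

# A "wall" at columns 6-7 (depth 2.0) in front of a far background (depth 10.0):
depth = [10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 2.0, 2.0, 10.0, 10.0]
hit = ssr_march(depth, start_x=0, start_depth=0.0, dx=1.0, dz=0.5)
```

A ray that marches parallel to the screen (`dz=0.0`) never gets behind anything and walks off the edge, returning `None` -- the on-screen-only limitation in miniature.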
In Blender's Eevee renderer, which is famous for its screen-space magic, you can get realtime mirrors with a "Reflection Plane" light probe by placing it over the surface that's supposed to be a mirror.
Look up “instancing” with regard to geometry on GPUs. The mesh can just be rendered twice with different transforms.
That said, it often makes sense to drop some of the decorative flourishes used on the primary instance of the model because of the added distance, or to use a lower level of detail. There’s really no great way to get around needing to tell the GPU to render the triangles twice (short of rendering the view to an intermediate buffer and reusing that, which is still twice…)
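A CPU-side sketch of "render the triangles twice" over the same vertex data, one transform per pass; the matrix layout (4x4 row-major) and all names are illustrative:

```python
# The same vertex list is submitted once with the model transform and once
# with a mirror transform -- two draw calls (or one instanced call) over the
# same vertex buffer.

def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by the point (x, y, z, 1)."""
    p = (v[0], v[1], v[2], 1.0)
    return tuple(sum(m[r][c] * p[c] for c in range(4)) for r in range(3))

IDENTITY = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
MIRROR_Y = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]  # y = 0 plane

mesh = [(0.0, 1.0, 0.0), (1.0, 2.0, 0.0)]          # shared vertex data
primary  = [mat_vec(IDENTITY, v) for v in mesh]    # first draw
mirrored = [mat_vec(MIRROR_Y, v) for v in mesh]    # second draw, flipped
```

One real-world wrinkle: a mirror transform flips triangle winding order, so the reflected pass typically also flips face culling (front faces become back faces).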