Well, you may still have an x larger than the largest value representable in your floating-point system, and y = 1/x. It is not insane to want z = x*y = 1 to be approximately true.
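A minimal sketch of that identity breaking down, in Python (CPython floats are IEEE 754 doubles; the value 1e400 is just an arbitrary out-of-range choice):

```python
import math

# A literal past the largest finite double (~1.8e308) silently becomes inf.
x = 1e400          # overflows to inf
y = 1 / x          # 1/inf is 0.0
z = x * y          # inf * 0.0 is nan, not the 1.0 you wanted
print(x, y, z)     # inf 0.0 nan
```

So once x overflows you don't get z ≈ 1 with a bit of error, you get nan.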
At that point you're not multiplying x and y to begin with.
So of course you want to avoid error states, but a double already has an exponent range well beyond astronomical. The diameter of the observable universe is around 10^27 meters, a Planck length is around 10^-35 meters, and the limit of the format is ±10^308.
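For scale, a quick Python check of those limits (sys.float_info reflects the underlying C double; the physical constants are the rough values from above):

```python
import math
import sys

# Largest finite double; one doubling past it overflows to inf.
print(sys.float_info.max)                  # 1.7976931348623157e+308
print(math.isinf(sys.float_info.max * 2))  # True

# The physical extremes fit with hundreds of orders of magnitude to spare.
universe_m = 1e27     # rough diameter of the observable universe, meters
planck_m = 1.6e-35    # Planck length, meters
print(universe_m / planck_m)               # ~6e61, nowhere near 1e308
```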
While I know there are better and more numerically stable ways to implement it, think of the softmax function. It is perfectly possible to have a list of non-astronomical, non-Planck-length numbers, naively softmax it, and die because of precision.
Softmax goes out the other side. Naively tossing around exponentials is so bad that it can easily explode any float, even a 128-bit one. It only adds another 4 bits of exponent, after all. Even a 256-bit float has fewer than 20 bits of exponent! So that's an example where floats in general can go wrong, but it's not an example where using larger floats is a meaningful help.
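A sketch of both behaviors in plain Python (function names are mine): math.exp overflows for arguments above roughly 709.78, so modest logits kill the naive version, while the standard max-subtraction trick keeps every exponent at or below 0:

```python
import math

def softmax_naive(xs):
    # exp() of anything above ~709.78 overflows a double;
    # CPython's math.exp raises OverflowError rather than returning inf.
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def softmax_stable(xs):
    # Subtracting the max keeps every argument <= 0, so exp() never
    # overflows; the shift cancels in the ratio, so the result is the same.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

xs = [800.0, 805.0, 810.0]  # modest numbers, nowhere near 1e308
try:
    softmax_naive(xs)
except OverflowError:
    print("naive softmax overflowed")
print(softmax_stable(xs))
```

Note the stable version never needs a wider float; it just avoids ever materializing exp(800).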
Addition has problems where x is close to negative y, sure.
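A one-line sketch of that failure mode in Python: each operand carries only rounding-level error, but after the near-cancellation that error is the entire result:

```python
# 0.1 + 0.2 is actually 0.30000000000000004 in binary; adding -0.3
# cancels the leading digits and leaves only the rounding noise.
a = 0.1 + 0.2
b = -0.3
print(a + b)   # ~5.55e-17 instead of 0: the error is 100% of the answer
```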