MPFR is sensitive in exactly the same way. Just because you can arbitrarily increase the precision of results with that library doesn't mean that it isn't still sensitive to roundoff error.
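For instance, here's a minimal sketch (assuming a standard MPFR install, built with something like `gcc demo.c -lmpfr -lgmp`) showing that the same product computed at two working precisions disagrees in the last bits of the lower-precision result — the roundoff doesn't go away, it just gets pushed further out:

```c
#include <stdio.h>
#include <mpfr.h>

int main(void)
{
    mpfr_t pi_lo, pi_hi, x_lo, x_hi, err;

    /* Same computation at 64 and 256 bits of working precision. */
    mpfr_init2(pi_lo, 64);
    mpfr_init2(x_lo, 64);
    mpfr_init2(pi_hi, 256);
    mpfr_init2(x_hi, 256);
    mpfr_init2(err, 256);

    mpfr_const_pi(pi_lo, MPFR_RNDN);   /* pi rounded to 64 bits  */
    mpfr_const_pi(pi_hi, MPFR_RNDN);   /* pi rounded to 256 bits */

    mpfr_mul_ui(x_lo, pi_lo, 1000000u, MPFR_RNDN);
    mpfr_mul_ui(x_hi, pi_hi, 1000000u, MPFR_RNDN);

    /* The 64-bit result differs from the 256-bit one: rounding
       still happened, just at a smaller scale. */
    mpfr_sub(err, x_hi, x_lo, MPFR_RNDN);
    mpfr_printf("64-bit result : %.20Rg\n", x_lo);
    mpfr_printf("256-bit result: %.20Rg\n", x_hi);
    mpfr_printf("difference    : %.5Rg\n", err);

    mpfr_clears(pi_lo, pi_hi, x_lo, x_hi, err, (mpfr_ptr) 0);
    return 0;
}
```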
In finite precision arithmetic "multiplying by pi or e" makes no sense. And all arithmetic is finite precision. You could multiply by an approximation, adjusting the precision so the rounding error stays below whatever bound you need.
In a more general context, you could choose e as the base of your number system, and then multiplying by e is a trivial shift. Then, however, it would be multiplying by 2 that makes no arithmetical sense. The natural question is whether it's possible to represent all computable numbers losslessly in one system. The answer is yes: a computable real can be represented by a procedure that produces an approximation to any requested precision, as sketched below.
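A very rough sketch of that construction, again on top of MPFR: a number is a procedure that, asked for n bits, writes an n-bit approximation. Operations request extra guard bits from their operands and round once. The names (cr_fn, cr_pi, cr_e, cr_mul_demo) and the fixed 32 guard bits are invented for illustration, not a rigorous error analysis:

```c
/* Build (assumption): gcc cr.c -lmpfr -lgmp */
#include <stdio.h>
#include <mpfr.h>

/* A "computable real": fills out with a prec-bit approximation. */
typedef void (*cr_fn)(mpfr_t out, mpfr_prec_t prec);

static void cr_pi(mpfr_t out, mpfr_prec_t prec)
{
    mpfr_set_prec(out, prec);
    mpfr_const_pi(out, MPFR_RNDN);
}

static void cr_e(mpfr_t out, mpfr_prec_t prec)
{
    mpfr_t one;
    mpfr_init2(one, prec);
    mpfr_set_ui(one, 1, MPFR_RNDN);
    mpfr_set_prec(out, prec);
    mpfr_exp(out, one, MPFR_RNDN);      /* e = exp(1) */
    mpfr_clear(one);
}

/* "Multiply" two computable reals: to deliver prec bits of the product,
   ask the operands for a few guard bits more, then round once. */
static void cr_mul_demo(mpfr_t out, cr_fn a, cr_fn b, mpfr_prec_t prec)
{
    mpfr_t x, y;
    mpfr_inits2(prec + 32, x, y, (mpfr_ptr) 0);  /* 32 guard bits: heuristic */
    a(x, prec + 32);
    b(y, prec + 32);
    mpfr_set_prec(out, prec);
    mpfr_mul(out, x, y, MPFR_RNDN);
    mpfr_clears(x, y, (mpfr_ptr) 0);
}

int main(void)
{
    mpfr_t p;
    mpfr_init2(p, 2);                  /* precision is reset on demand */
    cr_mul_demo(p, cr_pi, cr_e, 100);  /* pi*e to ~100 bits */
    mpfr_printf("pi*e ~= %.30Rg\n", p);
    cr_mul_demo(p, cr_pi, cr_e, 300);  /* ask again, get more bits */
    mpfr_printf("pi*e ~= %.90Rg\n", p);
    mpfr_clear(p);
    return 0;
}
```

Nothing is stored "exactly" here; instead any value can be re-queried to any precision, which is what "lossless" means for computable reals. The price is that every extra digit costs recomputation.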
Besides that, I think you should re-evaluate your decision-making for votes.
> In finite precision arithmetic "multiplying by pi or e" makes no sense. And all arithmetic is finite precision.
Yes, that's kind of my point: either you have a finite-precision cutoff and lose accuracy, or the memory used for your arbitrary-precision representation blows up to incredibly large values (transcendentals will do this to you quickly), or some combination of the two.
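A rough illustration of the blow-up half of that trade-off, using GMP rationals (the 50-digit pi approximation is itself already a cutoff; a true transcendental has no finite exact representation at all): keep every multiplication exact and the stored representation grows without bound.

```c
/* Build (assumption): gcc grow.c -lgmp */
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    /* A 50-significant-digit rational approximation of pi. */
    mpq_t pi_approx, acc;
    mpq_init(pi_approx);
    mpq_init(acc);
    mpz_set_str(mpq_numref(pi_approx),
        "31415926535897932384626433832795028841971693993751", 10);
    mpz_ui_pow_ui(mpq_denref(pi_approx), 10, 49);   /* denominator 10^49 */
    mpq_canonicalize(pi_approx);
    mpq_set_ui(acc, 1, 1);

    for (int i = 1; i <= 5; i++) {
        mpq_mul(acc, acc, pi_approx);               /* exact multiply */
        size_t bits = mpz_sizeinbase(mpq_numref(acc), 2)
                    + mpz_sizeinbase(mpq_denref(acc), 2);
        printf("after %d exact multiplies: ~%zu bits stored\n", i, bits);
    }

    mpq_clear(pi_approx);
    mpq_clear(acc);
    return 0;
}
```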