It depends on what you do with it and how much bandwidth the workload needs between the cards. For LLM inference with tensor parallelism (throughput is usually limited by VRAM read bandwidth, and only small activation tensors have to be exchanged between the GPUs per token), 2x 4090 will massively outperform a single A6000. For training, where gradient synchronization over PCIe becomes the bottleneck and the 4090 has no NVLink, not so much.
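
A quick back-of-envelope sketch of why the inter-card traffic is negligible for tensor-parallel decoding. All shapes are illustrative assumptions (roughly Llama-2-70B with 4-bit weights), not measurements:

    # Rough per-token data volumes for tensor-parallel decoding on 2 GPUs.
    # Model shapes below are assumptions (Llama-2-70B-ish, 4-bit weights).
    n_params = 70e9           # total parameters (assumed)
    bytes_per_weight = 0.5    # 4-bit quantization
    n_layers = 80             # transformer layers (assumed)
    hidden = 8192             # hidden size (assumed)
    tp = 2                    # tensor-parallel degree (2x 4090)

    # Every decoded token requires each GPU to stream its half of the
    # weights out of VRAM.
    vram_read_per_gpu = n_params * bytes_per_weight / tp      # ~17.5 GB

    # Inter-GPU traffic per token: one activation all-reduce after the
    # attention block and one after the MLP block, in fp16 (2 bytes).
    pcie_traffic = n_layers * 2 * hidden * 2                  # ~2.6 MB

    print(f"VRAM read per GPU per token: {vram_read_per_gpu / 1e9:.1f} GB")
    print(f"PCIe traffic per token:      {pcie_traffic / 1e6:.1f} MB")
    # ~17.5 GB at ~1 TB/s of VRAM bandwidth dominates; ~2.6 MB at
    # ~32 GB/s (PCIe 4.0 x16) is noise. Inference scales without NVLink.

For training the picture flips: each step moves something on the order of the full gradient (tens of GB for a model this size) between the cards, so the interconnect matters far more.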