The main benefits you get are protection from bugs in Kubernetes itself and a reduced blast radius. Even if you could build a secure, highly available cluster, you would still be exposed to Kubernetes bugs and to configuration mistakes, such as adding a network policy that blocks all communication across all namespaces.
Multiple clusters protect you from these types of configuration mistakes by reducing the blast radius and providing an additional landing zone for rolling out changes over time.
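To make that failure mode concrete, here's a minimal sketch of the kind of deny-all NetworkPolicy that causes it. The namespace name is hypothetical; the point is that an empty podSelector matches every pod, so stamping this into every namespace (say, via an over-eager automation loop) cuts off traffic cluster-wide:

```yaml
# Default-deny NetworkPolicy: the empty podSelector selects every pod in
# the namespace, and declaring both policyTypes with no allow rules blocks
# all ingress and egress for those pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: team-a   # hypothetical namespace; imagine this applied everywhere
spec:
  podSelector: {}     # matches all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```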
And making "many clusters" look exactly like "one cluster" is one of the goals the kcp prototype has been exploring (though it's still early), because I hear this ALL the time:
1. One cluster was awesome.
2. Many clusters mean I rebuild the world.
3. I wish there were a way to get the benefits of one cluster across many.
I believe that's a solvable problem, and it's part of what we've been poking at in https://github.com/kcp-dev/kcp (although it's still so early that I don't want to get anyone's hopes up).
At a high level, almost anything you would want to use multiple clusters for can be done on a single cluster, using e.g. node pools, affinity, and taints to ensure that workloads only run on the machines you want them to. As a simple example, you can set up a separate node pool for production, and use node affinity and/or taints to ensure that only production workloads can run there.
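As a rough sketch of that pattern: assume the production node pool carries a label pool=production and a taint dedicated=production:NoSchedule (both names are illustrative, not standard). A production Deployment then needs a matching node affinity and toleration, while anything without the toleration is kept off those nodes:

```yaml
# Hypothetical production Deployment pinned to a dedicated node pool.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-prod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-prod
  template:
    metadata:
      labels:
        app: web-prod
    spec:
      affinity:
        nodeAffinity:
          # Hard requirement: only schedule onto nodes labeled pool=production.
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: pool
                    operator: In
                    values: ["production"]
      tolerations:
        # The taint keeps non-production pods off these nodes; this
        # toleration lets the production workload land there.
        - key: dedicated
          operator: Equal
          value: production
          effect: NoSchedule
      containers:
        - name: web
          image: registry.example.com/web:1.0   # placeholder image
```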
One exception, as others have mentioned, is blast radius - with a single cluster, a problem with Kubernetes itself can take down everything.
Another issue is scaling limits. We've found a few dozen ways to break a cluster by scaling it along one axis or another. (Most are not related to "vanilla" Kubernetes itself but to the backing cloud provider or specific add-on components.)
Other management tasks are also easier with separate clusters, such as applying environment-specific OPA policies without having to scope them by labels or annotations that you hope everyone is using correctly.
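For example, with one cluster per environment, a Gatekeeper constraint can simply apply cluster-wide instead of being narrowed by match selectors. A sketch, assuming the K8sRequiredLabels ConstraintTemplate from the gatekeeper-library is installed (the label key is illustrative):

```yaml
# Requires every Namespace in this (production-only) cluster to carry an
# "owner" label. In a shared cluster you'd have to narrow this with
# namespaceSelector/labelSelector matchers and trust that the labels are set.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-owner-label
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels:
      - key: owner
```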