Setting up link failover between switches (you can't bond for 2gbps, iirc, if you are split onto two different switches) is sort of kludgy, too.
One's best bet is to have multiple locations with low latency between them, do the failover in software, and leave the n+x redundancy to BGP routes. It's a lot cheaper and works just as well.
Note that this is how the Big Boys do it, as well - but it works for two machines as easily as it does two million.
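To make the software side concrete, here's a minimal sketch of the kind of health-check helper you'd run under something like ExaBGP, whose text API reads announce/withdraw lines from a child process's stdout. The URL, prefix, and next-hop are made-up placeholders, not anything from this thread:

    import sys
    import time
    import urllib.request

    CHECK_URL = "http://127.0.0.1:8080/health"          # hypothetical local health endpoint
    ROUTE = "route 203.0.113.0/24 next-hop 192.0.2.10"  # placeholder prefix and next-hop

    def healthy():
        try:
            return urllib.request.urlopen(CHECK_URL, timeout=2).status == 200
        except OSError:
            return False

    announced = False
    while True:
        up = healthy()
        if up and not announced:        # service came up: start announcing
            print("announce " + ROUTE, flush=True)
            announced = True
        elif not up and announced:      # service died: pull the route
            print("withdraw " + ROUTE, flush=True)
            announced = False
        time.sleep(5)

The point is just that "failover" becomes withdrawing a route, and any BGP-speaking daemon can handle the rest.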
You can in fact bond for 2gbps if you are on two different switches, in two completely different ways.
One way involves Cisco stacking switches, which let you run 802.3ad across two independent 'stacked' switches. You can also use an external redundant power supply to feed each switch, so each switch has redundant PSUs and the switches themselves are redundant.
The second involves the Linux bonding driver in balance-rr mode. This has a slight bug in combination with the bridge driver, in that it sometimes won't forward ARP packets, but if the box is just a web head or whatever, you don't really care about those.
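For concreteness, here's a rough sketch of the host side of both setups, assuming iproute2 is available; the NIC names and address are placeholders. Mode "802.3ad" is what you'd run against the stacked switches above, "balance-rr" is the second approach:

    import subprocess

    MODE = "balance-rr"  # use "802.3ad" instead when the switch stack speaks LACP

    def run(cmd):
        # thin wrapper; raises if any command fails
        subprocess.run(cmd.split(), check=True)

    run("ip link add bond0 type bond mode " + MODE)
    # Slaves must be down before they can be enslaved to the bond.
    for nic in ("eth0", "eth1"):                  # placeholder NIC names
        run("ip link set " + nic + " down")
        run("ip link set " + nic + " master bond0")
    run("ip addr add 198.51.100.2/24 dev bond0")  # placeholder address
    run("ip link set bond0 up")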
The 'big boys' do use iBGP etc. internally, but that's for a different reason: at large scale you can't buy a switch with a large enough MAC table (they run out of CAM), so you have routers at the top of each rack that then interlink. You can still connect your routers with redundant switches easily enough using VLANs and such (think router on a stick).
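A quick sketch of what "router on a stick" looks like when the router is a Linux box: 802.1Q subinterfaces on a single trunk port, one per VLAN. The interface name, VLAN IDs, and addresses below are invented for illustration:

    import subprocess

    def run(cmd):
        subprocess.run(cmd.split(), check=True)

    # One tagged trunk (eth0) carries both VLANs; the router holds the
    # default gateway address for each.
    for vlan_id, gateway in [(10, "10.0.10.1/24"), (20, "10.0.20.1/24")]:
        run(f"ip link add link eth0 name eth0.{vlan_id} type vlan id {vlan_id}")
        run(f"ip link set eth0.{vlan_id} up")
        run(f"ip addr add {gateway} dev eth0.{vlan_id}")

    run("sysctl -w net.ipv4.ip_forward=1")  # actually route between the VLANs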
Yes, I was thinking of exactly that: stacking two independent switches (I've done it with the Cisco 3750, but you can also do it with other brands).
The only problem is that with this kind of stack you're now dealing with one "logical" system, so if the firmware is buggy or someone issues the wrong command, you can end up with a single point of failure (but that could also happen if an HA system goes wrong by itself, or because of you).
I thought stackable switches provided HA with minimal fuss, and I fail to see what's kludgy about that. I don't see any reason for bonding gigabit connections in this day and age, when 10G connections are readily available, although AFAIK stacking is usually done via proprietary high-speed links.