Distributed computing "across the planet" only works for low-bandwidth, latency-tolerant tasks. Look at the various distributed internet projects (the Great Internet Mersenne Prime Search, Folding@Home, etc.): each node computes for a long time and sends back a small amount of data, typically to a single server.
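A minimal sketch of the pattern GIMPS exploits, using the Lucas-Lehmer test: a worker grinds through p-2 modular squarings (hours of arithmetic for real exponents) and the entire result it ships home is a single yes/no.

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: 2**p - 1 is prime iff the residue hits 0.

    For an odd prime exponent p, this is p - 2 rounds of modular
    squaring -- almost all compute, with a one-bit answer at the end.
    """
    m = (1 << p) - 1          # the Mersenne candidate 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m   # all the work happens locally
    return s == 0             # the only thing sent back to the server

# 2**13 - 1 = 8191 is a Mersenne prime; 2**11 - 1 = 2047 = 23 * 89 is not.
print(lucas_lehmer(13))  # True
print(lucas_lehmer(11))  # False
```

Because the communication cost is a few bytes per work unit, it makes no difference whether the "network" is InfiniBand or a dial-up modem, which is exactly why these projects can span the planet.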
People use supercomputers for problems, like weather forecasting, that depend on high-bandwidth, low-latency interconnects. For example, the Cray CS-Storm (which the Swiss National Supercomputing Centre will be using) supports "QDR or FDR InfiniBand with Mellanox ConnectX®-3/Connect-IB, or Intel True Scale host channel adapters" (quoting http://www.cray.com/sites/default/files/resources/CrayCS-Sto... ).
To prevent congestion, the supercomputer network topology might even be wired so each pair of neighboring nodes has a dedicated connection.
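To see why those interconnects matter, consider a toy 1-D diffusion stencil, which is a stand-in for the kind of grid update a weather code performs (the weather-code analogy is my assumption, not something from the Cray datasheet). Every cell's new value needs its neighbors' current values, so when the grid is split across nodes, every time step forces a boundary ("halo") exchange with the neighboring nodes before the next step can start:

```python
def step(u):
    """One explicit time step of a 1-D averaging stencil.

    new[i] depends on u[i-1] and u[i+1]; on a cluster those neighbors
    may live on other nodes, so each step requires an exchange of
    boundary cells before any node can advance.
    """
    return (
        [u[0]]                                                  # fixed boundary
        + [(u[i - 1] + u[i + 1]) / 2 for i in range(1, len(u) - 1)]
        + [u[-1]]                                               # fixed boundary
    )

u = [0, 0, 100, 0, 0]
u = step(u)
print(u)  # [0, 50.0, 0.0, 50.0, 0]
```

A forecast run takes many thousands of such steps, and each one pays the interconnect's round-trip latency. At microsecond InfiniBand latencies that overhead is tolerable; at internet latencies of tens of milliseconds, the nodes would spend nearly all their time waiting, not computing.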