Speaking of which, I can understand how interferometry gets you to, say, 1/1000th of a wavelength, but the wavelength is 1000nm. How do they go from 1nm to 1/10000th the width of a proton? What's the trick?
Is it an integral transform thing, like how spectrum analyzers can claim super low noise floors if you sort of gloss over the "noise is proportional to bandwidth" part and look in a tiny bandwidth without normalizing?
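(For concreteness, here's a toy version of that spectrum-analyzer effect, with made-up numbers: a flat noise density N0 deposits N0 × RBW of power in each analyzer bin, so the displayed floor drops as the resolution bandwidth shrinks even though the density in dBm/Hz never changes.)

```python
# Toy illustration of the spectrum-analyzer point above: with a flat noise
# density N0 (in W/Hz), each analyzer bin collects N0 * RBW of power, so the
# displayed "noise floor" drops as the resolution bandwidth shrinks, while
# the normalized density in dBm/Hz stays put. All numbers are made up.
import math

N0_dBm_per_Hz = -170.0                      # flat noise density (illustrative)
N0_mW_per_Hz = 10 ** (N0_dBm_per_Hz / 10)   # convert to mW/Hz

for rbw_hz in (1e6, 1e3, 1.0):
    bin_power_mW = N0_mW_per_Hz * rbw_hz    # noise power captured in one bin
    floor_dBm = 10 * math.log10(bin_power_mW)
    print(f"RBW = {rbw_hz:>9.0f} Hz -> displayed floor ≈ {floor_dBm:6.1f} dBm "
          f"(density still {N0_dBm_per_Hz} dBm/Hz)")
```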
Cavities. We trade off bandwidth for peak sensitivity by sending the same light back and forth between the mirrors in the arms of the interferometer hundreds of times. As the gravitational wave passes, the same light samples it over and over and picks up additional phase shift, enhancing the signal. The downside is that we can't see gravitational-wave signals at frequencies far above the cavity pole frequencies, at a few tens of kHz, but the most promising sources we aimed at when the detectors were designed were expected to lie below that.
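To put rough numbers on that tradeoff, here's a minimal sketch. The arm length and finesse values are illustrative round numbers, not the real detector parameters, and the real response also depends on the recycling cavities mentioned below: the stored light boosts the signal phase by roughly 2F/π, but the response rolls off like a single-pole low-pass above the cavity pole at c/(4LF).

```python
# Minimal sketch of the cavity bandwidth/sensitivity tradeoff described above.
# Arm length and finesse are illustrative round numbers, not the real detector
# parameters; the actual response also depends on the recycling cavities
# mentioned in the next paragraph.
import numpy as np

c = 299_792_458.0   # speed of light, m/s
L = 4_000.0         # arm cavity length, m (illustrative)

def arm_cavity(finesse, f_hz):
    """Signal gain of a Fabry-Perot arm relative to a single-bounce arm.

    The stored light boosts the signal phase by roughly 2F/pi, but the same
    storage time averages the signal away above the cavity pole
    f_p = c / (4 L F), giving a single-pole low-pass response.
    """
    gain = 2 * finesse / np.pi
    f_pole = c / (4 * L * finesse)
    return gain / np.sqrt(1 + (f_hz / f_pole) ** 2), f_pole

for finesse in (50, 500):
    for f in (100.0, 10_000.0):
        g, f_pole = arm_cavity(finesse, f)
        print(f"F={finesse:>3}, f={f:>7.0f} Hz: gain ≈ {g:6.1f} (pole ≈ {f_pole:,.0f} Hz)")
```

Higher finesse buys more gain at low frequency but pushes the pole down, which is the bandwidth-for-sensitivity trade in one number.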
We also use techniques called power recycling and signal recycling to improve this bandwidth-sensitivity tradeoff even further. Combined, these techniques make up the remaining gap between your 1/1000th of a wavelength and the actual sensitivity of LIGO and Virgo.
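For a rough sense of the power-recycling part: shot-noise-limited readout improves as 1/√(laser power), and power recycling multiplies the power circulating in the interferometer by the recycling gain. The power level and gain below are illustrative, and order-unity factors are ignored.

```python
# Rough scaling sketch of the power-recycling benefit: shot-noise-limited
# displacement readout improves as 1/sqrt(P). The input power and recycling
# gain below are illustrative, and order-unity factors are ignored.
import math

hbar = 1.054_571_817e-34   # J*s
c = 299_792_458.0          # m/s
lam = 1.064e-6             # laser wavelength, m

def shot_noise_displacement(power_w):
    """Approximate shot-noise displacement ASD in m/sqrt(Hz), up to O(1) factors."""
    return math.sqrt(hbar * c * lam / (4 * math.pi * power_w))

p_in = 25.0            # input laser power, W (illustrative)
recycling_gain = 40.0  # power recycling gain (illustrative)

print(f"without recycling: {shot_noise_displacement(p_in):.1e} m/rtHz")
print(f"with recycling:    {shot_noise_displacement(p_in * recycling_gain):.1e} m/rtHz")
print(f"improvement:       sqrt({recycling_gain:.0f}) ≈ {math.sqrt(recycling_gain):.1f}x")
```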
Great question! The precision is not just better than the wavelength of the light. It's also way smaller than the surface roughness of the mirrors! How does it work?!
Like you suggest, and adding to what sleavey mentioned above, I would say the answer is: averaging over time and space. The laser beam is pretty wide, so it averages over a significant area of mirror surface. (The optical system also selects one spatial mode of the laser beam.) And the stated displacement sensitivity ("1/10000 the width of a proton") only occurs when you integrate over the sensitive frequency band.
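As a back-of-the-envelope version of that last point, with an illustrative flat noise level and band rather than an official noise curve: integrating the displacement amplitude spectral density over the sensitive band gives the RMS displacement that the "1/10000 the width of a proton" figure refers to.

```python
# Back-of-the-envelope version of the "integrate over the sensitive band" point.
# The flat ASD level and band edges are illustrative round numbers, not an
# official noise curve.
import math

asd = 1e-20               # displacement ASD, m/sqrt(Hz) (illustrative)
f_lo, f_hi = 50.0, 500.0  # "sensitive band" edges, Hz (illustrative)

# For a flat ASD, the band-limited RMS is just ASD * sqrt(bandwidth):
# x_rms = sqrt( integral of ASD^2 df ) = asd * sqrt(f_hi - f_lo)
x_rms = asd * math.sqrt(f_hi - f_lo)

proton_diameter = 1.7e-15  # m, rough
print(f"band-limited RMS displacement ≈ {x_rms:.1e} m")
print(f"≈ 1/{proton_diameter / x_rms:,.0f} of a proton diameter")
```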