LIGO blows my mind whenever I hear news about it. The sensitivity alone is insane -- it can detect a change in distance between its mirrors of 1/10,000th the width of a proton. That's the equivalent of measuring the distance to the nearest star (4.2 light years) to an accuracy of the width of a human hair. Those are bonkers numbers. How the hell did we even come up with this thing?
Speaking of which, I can understand how interferometry gets you to, say, 1/1000th of a wavelength, but the wavelength is ~1000 nm. How do they go from 1 nm to 1/10,000th the width of a proton? What's the trick?
Is it an integral transform thing, like how spectrum analyzers can claim super low noise floors if you sort of gloss over the "noise is proportional to bandwidth" part and look in a tiny bandwidth without normalizing?
Cavities. We trade off bandwidth for peak sensitivity by sending the same light back and forth between mirrors in the arms of the interferometer hundreds of times. As a gravitational wave passes, the same light samples it over and over and picks up additional phase shift, enhancing the signal. The downside is that we can't see gravitational waves at frequencies far above the cavity poles, at a few tens of kHz, but when the detectors were designed the most promising sources were considered to be below that.
We also use techniques called power and signal recycling to push this bandwidth-sensitivity trade-off even further. Combined, these techniques make up the difference between your 1/1000th of a wavelength and the actual sensitivity of LIGO and Virgo.
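To put toy numbers on that, here's a back-of-envelope sketch, not the real noise budget; the bounce count, circulating power, and bandwidth below are round-number assumptions I'm plugging in:

    import math

    # Toy numbers only: real LIGO has many other noise sources
    # (seismic, thermal, quantum radiation pressure, ...).
    wavelength = 1064e-9      # m, LIGO's Nd:YAG laser
    n_bounces = 300           # assumed effective round trips in the arm cavity
    power = 100e3             # W, assumed circulating arm power
    bandwidth = 100.0         # Hz, assumed measurement bandwidth

    # Shot-noise-limited phase sensitivity ~ 1/sqrt(number of photons detected)
    photon_energy = 6.626e-34 * 3.0e8 / wavelength    # J per photon
    photons = power / photon_energy / bandwidth       # photons per sample time
    phase_noise = 1.0 / math.sqrt(photons)            # radians

    # A mirror displacement dL shifts the phase by (4*pi/lambda)*dL per round
    # trip, and the cavity multiplies that by the number of bounces.
    displacement = phase_noise * wavelength / (4 * math.pi * n_bounces)
    print(f"toy displacement sensitivity ~ {displacement:.1e} m")   # ~4e-21 m

That lands within a couple of orders of magnitude of the quoted numbers, which is about all a toy estimate like this can claim.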
Great question! The precision is not just better than the wavelength of the light. It's also way smaller than the surface roughness of the mirrors! How does it work?!
Like you suggest, and adding to what sleavey mentioned above, I would say the answer is: averaging over time and space. The laser beam is pretty wide, so it averages over a significant area of mirror surface. (The optical system also selects one spatial mode of the laser beam.) And the stated displacement sensitivity ("1/10000 the width of a proton") only occurs when you integrate over the sensitive frequency band.
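Here's a minimal sketch of the time-averaging part of that, using toy white noise (real detector noise is colored, so this only shows the principle):

    import numpy as np

    rng = np.random.default_rng(0)
    noise = rng.normal(0, 1.0, 1_000_000)   # toy white readout noise, std = 1

    # Averaging N independent samples shrinks the noise std by sqrt(N)
    for n in (1, 100, 10_000):
        averaged = noise[: len(noise) // n * n].reshape(-1, n).mean(axis=1)
        print(f"N={n:6d}  std = {averaged.std():.4f}  (expected ~{1/np.sqrt(n):.4f})")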
Interferometry has a long history in experimental physics. Scale the Michelson–Morley experimental design up to large distances, add modern technology like lasers and computers, and you get LIGO.
LIGO is a huge milestone. Turning it on is like the moment Galileo pointed his telescope at the Moon. This is a new class of instrument, observing the universe in a medium that was never utilized before. Gravitational waves let us observe things that couldn't be seen before, like stars behind dust clouds.
It opens a new window on the world. We might finally be able to “see” dark matter. We may be able to see the gravitational imprint from before the Big Bang, or gravitational leakage from extra dimensions or other universes.
Galileo's telescope and the HST both work on electromagnetic radiation. LIGO works on gravitational waves. LIGO is the beginning of a completely different class of instruments.
I thought LIGO was kind of crude, i.e. it's being used to measure at the scale of black-hole-level gravity. Is the dark matter experiment trying to measure the gravity of a photon? Maybe too optimistic?
Sounds like they're looking for dark photons interacting with the detector and, thanks to its extreme sensitivity, they're able to place upper bounds on the strength of the interactions (since none were observed). This means dark photons have very, very weak interactions with normal matter... or that they don't exist. :)
The gravitational observatories are so impressive to me - I can remember reading about them being theoretical well before LIGO ever started construction, and the fact that they're here and working just like we expected is so amazing. I can't wait until we have LISA in space!
But it's so strange when we shut it off to do upgrades and stuff - like, I totally understand why we have to do it, but it's like we finally turned on a microphone and could hear things that were always happening but we could never observe before, and then we turn it off for a little while - the thought that there are events going on right now that we will never be able to detect because we aren't listening gives me major FOMO.
LIGO collaboration member here: we focus on maximizing the cumulative number of detections over time, and it turns out that by turning the detector off for a year to upgrade it, we can increase the sensitivity so much that a year after that we will have more total detections than if we had left it running continuously for two years. So you also have to think about all the distant/faint signals we would never be able to detect because we kept listening at too low a sensitivity...
(We do make sure that we have a dramatically less sensitive sister detector in Germany, called GEO, listening whenever we're not so that we'll see something really close and loud, like a galactic supernova, even when LIGO is offline.)
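To make that trade-off concrete, here's a toy calculation (the rates and the sensitivity gain are invented round numbers; the key physics is that detection rate scales with surveyed volume, i.e. with range cubed):

    # Toy model: a year of downtime buys a sensitivity upgrade that extends
    # the distance ("range") we can see. Rate scales as range**3 (volume).
    base_rate = 10        # detections/year at current sensitivity (invented)
    range_gain = 1.3      # assumed 30% range improvement from the upgrade

    no_upgrade = base_rate * 2                    # run as-is for 2 years
    with_upgrade = base_rate * range_gain**3      # 1 year off, then 1 year on
    print(no_upgrade, round(with_upgrade, 1))     # 20 vs 22.0 detections

So even a modest 30% range improvement beats two years of uninterrupted running, in this toy version.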
I sincerely love that your reply to FOMO was to point out how his FOMO should be even worse. This is exactly how my brain works when I'm optimizing something: I have to balance one obsession with all the other things I need to obsess about.
Thanks for the info! I expected it was something like this, that you expect the increased sensitivity is worth it. I just wish we had more observatories running at different sensitivities - we've just got to get the cost down, that's all :)
> the thought that there are events going on right now that we will never be able to detect because we aren't listening gives me major FOMO.
I can relate to the sentiment but keep in mind that human timescales are downright puny compared to cosmological timescales. And there's lots of stuff going on all the time (lots is an understatement) so I would say you won't lose much turning off the detector for a year.
I totally understand this from a rational standpoint, but if you take that to its logical conclusion, you expect that nothing will happen in that year that is very interesting or rare, so why look at all?
Sure, our timescales are nothing, making what we have even more valuable, no? We've missed out on a whole lot of observations - we have a lot of catching up to do!
Why would you have FOMO? Why is any particular gravitational wave important enough that you'd worry about missing it? I'm not trying to hassle you here, I'm genuinely perplexed by this attitude.
I used to work in a library and we all had a similar feeling about books. They were all valuable and to be preserved! Objectively, that's not practical. But that base irrational feeling is still useful and important.
Technically we've missed at least four billion years of gravitational wave events, so a week of shutdown for an upgrade isn't going to make a huge difference to that.
Yes, that's where the practicality comes in. But I'm sure there are people who have FOMO over all that data collection we missed over the last 14 billion years. And that's good! Maybe somebody will come up with something clever that is better than nothing. E.g., all the CMB work: https://en.wikipedia.org/wiki/Cosmic_microwave_background
My crude searching tells me there are ~3 supernovas in our galaxy every century. Miss one now and the next might wait for the next generation of astrophysicists.
Close by, interesting astronomical events are rare.
The last decent naked eye supernova was the Crab Nebula supernova in 1054.
It would be a real shame to miss the next one in this galaxy.
Other "rare" (non-gravitational wave) events I can think of are: the Shoemaker Levy comet hitting Jupiter, the Carrington Event in 1859, Betelgeuse dramatically dimming (last year).
Last 'decent' naked eye supernova was surely Kepler's Supernova of 1604.
And the recent fading of Betelgeuse wasn't all that unusual for a fairly typical red supergiant pulsating star.
I replied to a sister comment similarly, but the one resource that we are severely limited by is time. There were events that were far more common in the early universe that we may never see now. Rare things could happen at any time.
The reality is that we can't observe 100% of the time due to resource constraints, and the cost/benefit of upgrading is totally worth it in the long run - rationally speaking it's the right move. I will just always wonder what we are missing out on that we might never have the opportunity to observe again - or maybe not in our lifetimes.
question: according to Wikipedia, LIGO was built between 1994 and 2002, and didn't detect gravitational waves until 2015 (announced in 2016).
I never heard about LIGO until the discovery in 2016, so for almost 20 years it was off my radar, so to speak.
What multi-decade experiments are being created today, which will be ready to produce amazing results in 20-30 years? What's currently under construction, but I'll never hear about it until 20 years from now, when it makes an amazing discovery?
From the talk by the Caltech professor leading LIGO: the theory was good on paper, but the engineering problems were very tough. The platform making the measurement has to be absolutely still, but the Earth always has some movement due to seismic activity. For years they just couldn't get the passive vibration isolation tech stable enough for the measurement to work. The project was almost canceled. Finally they went back to the drawing board and partnered with some companies to build new active vibration isolation technologies. It took years to get it working.
Their active vibration isolation technology is insanely good. It detects tiny seismic movements of the Earth far away and actively compensates to keep the platform stable. This is one instance of a scientific project spinning off new technologies that will have many other uses in the future.
My head instantly went to fusion experiments[0]. There are planned experiments in that space which, if they go ahead, won't produce results until the 2040s.
The resolution of these instruments is astonishing - and literally a new window on the universe.
Obviously, being able to detect amplitudes so small is key to this whole project, as the sources are so distant (and the amplitude falls off with distance: as 1/r for the strain LIGO measures, while the inverse square law applies to the energy flux).
This makes me wonder how these phenomena would appear much closer to the events - how close would we need to be to perceive with our senses the passing of a gravitational wave, and what would it look like? I'm guessing some kind of passing tidal forces would be felt — has anyone done modeling to figure out what that might be like?
How close and how much amplitude (or would frequency be the killer?) would be required to start damaging ordinary material objects? Is it so close to the source that you're already doomed in the black hole's grip anyway, or would an event at the center of our galaxy be perceptible here? Would the waves rip apart nearby stars (for what value of nearby), or be noticeable in their spectra as some kind of ripple? It'd be cool to get some kind of a sense of the scale of these events' affected zone.
The article mentions the new KAGRA detector in Japan joining the group. Does anyone know: how does the accuracy improve as more detectors come online?
Will we see a day when we have 20, 50, 100 detectors around the globe, with events near-certain because so many detectors see them? Or is it diminishing returns, and four detectors is already plenty?
Additional detectors improve our ability to detect somewhat (assuming they're of similar sensitivity -- otherwise they can actually hurt our overall network sensitivity!). But the _real_ advantage is in source localization to guide multi-messenger (optical/gamma/neutrino) followup, which is where many of the most important discoveries will come from. It's like triangulation (a toy sketch of the time-delay geometry follows at the end of this comment).
Given that an observatory costs on the order of ~$1B to build and operate for a few decades, we probably won't see more than five current (second generation) instruments (2x LIGO + Virgo + KAGRA + LIGO India).
There are also two proposed but not yet funded 3rd-generation ground-based instruments ("Cosmic Explorer" and the "Einstein Telescope"), one planned space-based instrument ("LISA"), and early efforts at proposing a future moon-based detector (the Gravitational-Wave Lunar Observatory for Cosmology, the Lunar Gravitational-Wave Antenna, and the Lunar Seismic and Gravitational Antenna).
To get to tens or hundreds of detectors, someone will have to invent a fundamentally different technology that can be produced at dramatically lower cost. Maybe next century...
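As promised above, a toy sketch of the triangulation geometry (the ~3000 km Hanford-Livingston baseline is the real figure; everything else is simplified to two detectors and a single angle):

    import math

    c = 299_792_458.0     # m/s
    baseline = 3.002e6    # m, Hanford-Livingston separation (~10 ms light travel)

    def source_angle(dt):
        # Angle between the baseline and the source direction, from the
        # arrival-time difference dt (s). A single delay only constrains the
        # source to a ring on the sky, which is why each added detector
        # shrinks the localization region so dramatically.
        return math.degrees(math.acos(c * dt / baseline))

    print(f"max possible delay: {baseline / c * 1e3:.1f} ms")
    for dt in (0.0, 2e-3, 6e-3, 10e-3):
        print(f"dt = {dt * 1e3:4.1f} ms -> ring {source_angle(dt):5.1f} deg from baseline")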
Some quantum-gravity theories predict additional polarization modes that general relativity doesn't, so such measurements may start ruling particular theories in or out.
You might be interested in this talk on the future of GW detectors and the resulting science prospects: https://www.youtube.com/watch?v=iet6pS4gxCk (esp. from ~25:40 to the end).
The next stage should be putting them in space. Vibration on Earth is a huge problem, and space is a much more stable environment. Also, the arms can be made far longer, greatly enhancing the sensitivity.
In terms of the science, ground-based and space-based GW detectors are complementary, as they detect gravitational waves at very different frequencies. One doesn't replace the other.
There are also serious (if obviously longshot) efforts by colleagues of mine to propose moon-based GW detectors: https://indico.ego-gw.it/event/263/
Isn't detecting different frequencies a matter of varying the distance between the detectors? Space-based detectors can be placed arbitrarily close or far apart, and moved at will; ground-based ones are fixed. Also, I remember isolation from Earth's vibration was a huge if not the biggest engineering challenge. There's no such problem in space.
Interesting how, even with three LIGO observatories, there is still a 15% false positive rate. How many more observatories are needed to reduce this to a negligible number?
LIGO collaboration member here: this is a tunable knob, independent of the number of detectors, that we've set very intentionally based on feedback from astronomers who want to follow up with optical/gamma/neutrino instruments following our BNS detections. We could reduce our false positive rate to a negligible number today, at the cost of _not_ reporting many likely discoveries.
Also, a minor point but there are only two LIGO detectors online at the moment, with a third sister detector in Italy (named Virgo), and a fourth coming online soon in Japan (named KAGRA). There does in fact exist a third LIGO instrument, but it's currently mothballed, awaiting construction in India.
Yes, we carefully explain the statistical confidence of our detections in each discovery paper, and there are dozens of methods papers that go into excruciating detail on those techniques.
Prior to our first detection, the overwhelming prime directive of our collaboration was _not to make a false detection_, and we went to insane lengths to avoid one; e.g., we had a small team of people secretly injecting false signals -- "blind injections" -- into our data, so that we would all expect to see them regularly and wouldn't be tempted to game the analysis to find a detection where there wasn't one. (Amusingly, because of this, it took weeks for many of us to believe the first detection was real, even after the blind injection team swore it wasn't one of theirs. In the end, we charged an independent team to do a forensic analysis of everything from the security cameras and seismometers in the observatories, to every computer and disk the data passed through, to convince ourselves it couldn't possibly have been maliciously injected by hand by a rogue scientist. It was a wild few weeks!)
Now that it's clear gravitational waves exist, however, we focus on optimizing the false alarm rate for astronomers (who want it well above zero so they don't miss anything) rather than optimizing for ~zero false detections.
Unique as far as I know, at least in a scientific collaboration like ours. But we were unusually sensitive to the risk of false detections given (a) skepticism of LIGO's prospects from other physicists, and (b) the lingering stain of a false detection claim, from a bar detector, a few decades earlier.
We believed that if we had announced a discovery that turned out to be wrong, it would probably have meant the end of our experiment, and in practice the end of the field, at least for a long time. It was fortuitous that the first detection turned out to be gold-plated and unambiguous; otherwise we would probably have published a bunch of "we saw something interesting but can't claim it as a GW discovery" papers on the next few weaker events before we felt comfortable making a confident claim.
Not quite the same lengths, but IIRC the LHC uses two detectors with substantial design differences, and keeps the teams from sharing information for some of what they target, as a way to reduce systematic errors.
When I see LIGO graphs they always appear linear - the change in frequency is linear with time.
But with time dilation as black holes near each other, shouldn't the frequency change be exponential? Or does it cancel out - the frequency goes up as they get closer, and then time dilation lowers it back as it dilates into infinity?
I have a basic problem with Sabine's claims. In a response comment she says: "No signal analysis can confirm that the signal was of astrophysical origin."
I know nothing about the actual hardware. But I can reasonably speculate that LIGO can afford to have local atomic clocks. So they can timestamp their observations to nanosecond precision.
Given the finite speed of light, the candidate astrophysical events are necessarily detected many milliseconds apart. There can only be two ways these potential events would correlate:
1) there are so many possible false positives constantly happening that there is some reasonable probability of this correlation occurring by chance,
or 2) a highly sophisticated "goof" or fraud. Someone could presumably set up some local source near each detector and spoof a signal. E.g. carefully "wiggle" large masses, each only a few km away from each detector.
So, if it's not astrophysical, can Sabine tell us which it is? False positive? Goof?
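For what it's worth, option 1 is quantifiable. Here's a toy version of the standard coincidence argument (the glitch rates are invented; the real background estimate uses time slides, which this only gestures at):

    # If each detector produces uncorrelated loud noise "glitches" at some
    # rate, accidental coincidences within a small window happen at roughly
    # rate1 * rate2 * window.
    glitch_rate = 1e-3    # Hz per detector (invented)
    window = 0.020        # s, coincidence window (~2x the light travel time)

    accidental_rate = glitch_rate**2 * window        # Hz
    print(f"~{accidental_rate * 3.15e7:.2f} accidental coincidences per year")
    # ~0.63/year -- and that's before requiring both waveforms to match the
    # same physical template, which suppresses the background much further.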
I thought that 2019 article was "answered" by the fact that we now have multi-messenger observations of gravitational wave events that confirm fading light curves.
I believe we still only have one multi-messenger observation. One observation has infinite variance =P.
They're ALSO not independent observations, though. The LIGO observation was made, and then the directive was "go look for correlating events". That's a dependent observation.
To be truly independent, they would have to inject synthetic data (say 3x) into the observation reports and make sure that the multi-messenger results don't correlate with the synthetic data, much like GP says they did "at the beginning of LIGO" with internal (non-multimodal) signals.
This would have the side benefit of making the analysis pretty simple to do (just a chi-squared analysis) and you don't have to have a PhD in signals analysis with a specialization in the filters used by LIGO to believe with a high confidence that 1) GWs exist and 2) LIGO is actually measuring GWs.
GW fanboy here. IIRC, additional detectors provide a better ability to locate the GW source so that other telescopes can also capture an event. They also improve polarization measurements. The number of detectors doesn't determine the false positive rate, which is just a user-selected point on a receiver operating characteristic curve. In general, we get more payoff from improving detector sensitivity than from having more detectors (that's why GEO 600 was more useful for the tech it developed than for its recent observations).
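A toy version of that ROC-curve point, in case it helps (synthetic Gaussian statistics, not a real matched-filter pipeline; the signal strength is an invented number):

    import numpy as np

    rng = np.random.default_rng(1)
    noise_only = rng.normal(0, 1, 100_000)    # detection statistic, noise alone
    with_signal = rng.normal(3, 1, 100_000)   # same statistic with a signal present

    # Sweeping the threshold trades false alarms against missed detections:
    for threshold in (1, 2, 3, 4, 5):
        far = (noise_only > threshold).mean()     # false alarm rate
        eff = (with_signal > threshold).mean()    # detection efficiency
        print(f"threshold={threshold}: false alarms {far:.4f}, efficiency {eff:.4f}")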
Wikipedia lists S200114f (on page https://en.m.wikipedia.org/wiki/List_of_gravitational_wave_o...) in a way that makes it sound like there's substantial uncertainty about its cause. Is that because Wikipedia is out of date and people have since figured out the (not unusual) cause? Is it just wrong? Or am I misunderstanding what you mean, or what the people who labeled the entry on Wikipedia meant?
The original field-equation solution is for a constant-velocity drive, which I believe does not emit any gravitational waves. I'm not sure how much work has been done on the formation of the bubble and what effects that would produce, or indeed on how a ship could change its heading, which are events that I think should produce gravitational waves.
Do gravitational wave interferometers create a single pixel of data, or do they create an "image" of gravitational distortion in a 2D region? The first few Google/Wiki hits talk about the physics of interferometry, but not the actual resulting output from real hardware. I'm assuming there is no image, otherwise there would be some associated with the articles, rather than artistic renditions?
You can think of each individual detector producing a single-channel audio signal. By combining the signals from multiple detectors it's possible to determine where the signal is coming from. But the output is neither a picture nor a single pixel: it's a brief blip of a few seconds of audio-frequency time series.
A decent analogy is to think of each LIGO detector not as a camera but a microphone.
Or a 3-pixel camera where the changing pixel colors are what matters :D
The only difference between a camera and a mic is the number of vibrating things it cares about (a mic only cares about the vibration of its single membrane; a camera effectively has millions of light-sensitive elements on a grid).
The 3 interferometers each care about the time variation of the difference between 2 mirror-distance measurements. So it's more than a single mic: each channel is the difference of 2 distance measurements, varying in time. It would be like a 3-pixel video, with white as the baseline for each pixel, varying towards green or blue (arbitrary colors) depending on whether the difference between the mirror distances is negative or positive.
The microphone analogy is particularly apt, because the signals are also audio-frequency. You can listen to them. Sometimes in the control room we play the output on a loudspeaker. It can help in tuning the instrument (most of what you hear is noise).
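If you want an approximation of the experience without real data, here's a sketch that writes a toy inspiral-style chirp to a WAV file (the frequency sweep is an arbitrary power law, not a physical waveform):

    import wave
    import numpy as np

    rate = 8000                              # samples/s
    t = np.arange(0, 2.0, 1 / rate)          # 2 seconds of signal
    t_c = 2.0                                # toy "merger" time

    # Frequency rises as the merger approaches, loosely like an inspiral
    freq = 50 * (t_c - t + 0.01) ** -0.4     # Hz, arbitrary power law
    phase = 2 * np.pi * np.cumsum(freq) / rate
    samples = (0.5 * np.sin(phase) * 32767).astype(np.int16)

    with wave.open("chirp.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)                    # 16-bit audio
        f.setframerate(rate)
        f.writeframes(samples.tobytes())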
There are devices that are arrays of microphones that can be used to detect what direction a noise comes from. The Army uses one to triangulate rifle fire, and Boeing started using one circa the 787 to locate cabin noises (planes have a sound insulation budget, and positively identifying the ingress points allows for better ambient noise levels).
>> it's a brief blip of a few seconds of audio-frequency time series.
A bit more time and they should be able to point telescopes in the general direction of the event prior to a merger. Not sure what kind of directional precision can be obtained, nor if there would even be much to see.
I'm pretty sure that they'd have to be pointed in the right direction first.
There is very little time between the start of something detectable and its finish. But in the case of a neutron star merger, we were able to point other telescopes and observe the resulting kilonova for days afterward.
They measure the total distortion over the baseline, so in that sense it's a single pixel. But I think measurements from several interferometers can be combined to give some very limited spatial resolution.
shameless plug on a project I did with a High School physics student plotting gravitational waves inspired by the "joy division" plot: https://moleksak.com/ligo/ ... we'll have to update it with the new data!
There is no upper limit to the wavelength of gravitational waves. However, the lowest frequency that we can currently detect is around 10-100 Hz. Dividing the speed of light (3e8 m/s) by 10 Hz gives 3e7 meters, or 30,000 km. The upper end of our frequency range is currently around 8 kHz, which corresponds to 37.5 km if I did the math right.
LISA will be sensitive to much lower frequencies (longer wavelengths). NANOGrav also searches for these lower-frequency signals.
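Putting those numbers side by side (just lambda = c/f; the LISA and pulsar-timing band edges below are order-of-magnitude values I'm assuming):

    c = 3e8  # m/s

    bands = {
        "ground-based low end, 10 Hz": 10.0,
        "ground-based high end, 8 kHz": 8e3,
        "LISA band, ~1 mHz (assumed)": 1e-3,
        "pulsar timing (NANOGrav), ~1 nHz (assumed)": 1e-9,
    }
    for name, f in bands.items():
        print(f"{name}: wavelength ~ {c / f:.2e} m")
    # 10 Hz -> 3e7 m (30,000 km); 8 kHz -> 3.75e4 m (37.5 km);
    # 1 mHz -> 3e11 m (~2 AU); 1 nHz -> 3e17 m (~30 light years)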
We expect most GWs to be of very low frequency. High-frequency GWs require tremendous mass and acceleration to generate; as far as we know, they are only generated in the final moments of black hole collisions.
The orbital period of the merging objects is ~1 ms near the end, and gravitational waves propagate at the speed of light, so I'd estimate the wavelength at ~3e5 km/s × 1e-3 s, or in the neighborhood of hundreds of kilometers.
OK, I probably have some wild thoughts. I was thinking: since light has different wavelengths and humans can't see many types of light because of that, maybe dark matter is similar?
But it's a very good question. It gets at the nature of gravitational waves: frequency is certainly a key characteristic, and it bears directly on how LIGO works.