Mission to reach and operate at the focal region of the solar gravitational lens (arxiv.org)
340 points by WithinReason on July 28, 2022 | 141 comments



"Using a meter-class telescope one can produce images of the exoplanet with a surface resolution measured in tens of kilometers and to identify signs of habitability."

Here's gmaps satellite view at ~10km/px

EDIT: fixed permalink https://www.google.com/maps/@41.4220797,-93.7912673,6877284m...

also: https://imgur.com/a/JwHmaIY

Wow.


When you can resolve territorial borders and labels, that's a pretty strong sign of intelligent life.


Jokes aside, if they have artificial lighting, it should be visible.


Holy smokes, when I saw the resolution I got excited. When I read this, I went wow!!

And the reason we as a society aren't doing this already is what? This thing should be consuming a percent or two of US GDP until it's done.


> And the reason we as a society aren't doing this already is what?

Because you have to park it 900 AU away from the Sun (Voyager 1 is at about 156 AU after 45 years and not in a solar orbit), it needs a special kind of coronagraph that flies as a separate spacecraft from the telescope (which hasn't been invented yet), and your telescope can only look at whatever lies directly on the opposite side of the Sun from the spacecraft. At 900 AU it would take decades to slew such a telescope to view a new object in the sky.

At the current time this is basically a science fiction project that promises that if we put a very special spacecraft in a very specific place then it is within our technological capability to image an exoplanet. That doesn't mean it's even a remotely practical idea. It's interesting to talk about but useless to get all worked up about why it's not getting funded.


Indeed, there are a lot of engineering challenges related to the distance. Solar power is much, much dimmer at 900 AU -- that's roughly 20x Pluto's distance -- so you'll need to bring something like an RTG. But your RTG will decay while the spacecraft travels (Voyager's RTG has already lost about 30% of its capacity). Once there, you'll need a lot of power to transmit data back with any sort of speed or reliability. All of this is on top of the usual telescope-reliability concerns; in many ways it would make the James Webb telescope mission look straightforward.

This is one of the reasons for papers such as this one: exploring what meaningful solutions to these problems, and meaningful mission profiles, would actually look like in practice.
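
As a sanity check on the RTG point: a ~30% loss over Voyager's ~45 years is roughly what you'd expect from Pu-238's 87.7-year half-life alone (thermocouple degradation makes the electrical output fall even faster). A minimal sketch; the 30-year cruise time below is just an assumption for illustration:

    HALF_LIFE_YEARS = 87.7  # Pu-238

    def rtg_fraction_remaining(years: float) -> float:
        """Fraction of initial thermal power left after `years` of radioactive decay."""
        return 0.5 ** (years / HALF_LIFE_YEARS)

    print(f"~45 years (Voyager so far):    {1 - rtg_fraction_remaining(45):.0%} lost")  # ~30%
    print(f"~30 years (a notional cruise): {1 - rtg_fraction_remaining(30):.0%} lost")  # ~21%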


> All of this is on top of the usual telescope-reliability concerns

I'd hope there's less of a risk from micrometeorites (though it would have to look for incoming projectiles on its own and take evasive action). Then again, 900 AU is getting toward the inner edge of the Oort Cloud, so perhaps there's a fair amount of small junk out there too.


> But your RTG will decay while the spacecraft travels (Voyager's RTG has already lost 30% of capacity).

So just bring 50% more? I don't think the weight of the Pu-238 is a key limiting factor on this mission.


I'd rather put the money into solving climate change so we're around long enough to meet the other society!


I think this mistakenly assumes that tax money is spent on the things that are a priority to the society that pays the taxes. This is not how tax money is allocated at all.


I've been told that some national borders make the different attitudes toward the environment plain to see. That is, one side of the border is filled with trees and the other has been denuded by a population that doesn't think long term and simply trashed the environment.

That kind of difference might be visible through one of these telescopes, especially if the border is long.


Yeah, that's a good point. The classic example is Haiti and the Dominican Republic, where the DR side of the border has trees while the landscape just on the other side is often bare.

Though I've heard this is actually overstated.


You can see it more clearly on the Israel-Egypt border


I think that's just different sources of imagery with different lighting. At some zoom levels it's more or less apparent, and it's mostly the same desert on both sides: https://www.google.com/maps/@31.03477,34.3411577,15440m/data...


You can see the national borders of DPRK from space at night.[0]

[0] https://www.npr.org/sections/thetwo-way/2014/02/26/282909885...


Night sky must be great in NK!


There's also the less prominent East/West Berlin border, which is still visible at night due to the different types of street lights and bulbs used on each side.

https://www.businessinsider.com/divide-between-west-east-ber...


West Berlin still has gas lights, which are slowly being transitioned to LED.


Didn't they say the spacecraft will have to move around to get each pixel? That means the planet would turn between pixels and you wouldn't get a coherent image. It would also take a very long time to cover tens of thousands of pixels. They may need to send many imagers and combine the results...


I wonder if you could use just 1 spacecraft and actually exploit the rotation of the planet to "scan" the planet surface in 1 dimension. The movement of your telescope would provide the other dimension, so in a slow fly-through you could get a 2D image of the planet surface. So e.g. imaging a 12000 km (Earth-size) planet at 10 km/pixel would take 1200 planet rotations, each providing a row of pixels, taking about 3 years assuming it's rotating at the speed of Earth. You would just have to know ahead of time the orientation of the planet's axis of rotation so you approach the "focal region" from the right angle.
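
A quick back-of-the-envelope check of those numbers (all of them assumptions from the comment above: Earth-sized planet, 10 km/pixel, one row of pixels per rotation, Earth-like spin):

    planet_diameter_km = 12_000
    pixel_km = 10
    rotation_hours = 24  # assume Earth-like rotation

    rows = planet_diameter_km / pixel_km        # 1,200 rows of pixels
    scan_days = rows * rotation_hours / 24      # one planetary day per row
    print(f"{rows:.0f} rows -> about {scan_days / 365:.1f} years")  # ~3.3 years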


Main thing is that you don't get an image, but a single line of pixels.

Such a line taken of Earth is likely to have nothing in it but ocean and clouds.

If you sent a swarm, each would get its own line of pixels, as many lines as spacecraft.


A few years ago I saw that someone had taken a photo of their back yard using a single light sensor.

The idea was that at each moment of each day, the sun was in a different spot in the sky, and each day it shifted toward or away from the equator. So they stitched the samples together mathematically based on timestamp and a 3d projection of the earth. You could tell it was a backyard, but it was very gauzy, like that moment when you first wake after a nap.



This is how scanners (the paper/photo kind) work: a single row of pixels. You can make a wicked camera from one: https://www.engadget.com/2014-12-30-scanner-camera.html


The single line comes from the trajectory of the spacecraft as it flies through the focus point, reading one pixel at a time. Stopping and rastering is not possible and not close to possible with current technology. Of course, you still want a raster grid, but you have to obtain the grid by sending a swarm of different spacecraft. They each collect a line as they fly through the focus point, and then you can assemble the lines into a grid.


Except, it is not.

The spacecraft would be taking one pixel, then a while later another pixel, then another. It could collect all kinds of detail at each pixel, with full spectra, polarization, what have you, but just one at a time, unlike your scanner bed, which collects many thousands simultaneously, with as many separate optical sensor elements.

After it has collected the whole series, that would be the one, thin row of pixels sent home.

The next spacecraft over could collect another line and send that, watching a different bit of planet surface. Two thin line samples would not be very informative, but hundreds could be.

It is actually, potentially, a bit better: the planet would be rotating while the spacecraft moves between pixel points, and the spacecraft could continue sampling the entire time, picking up a series of adjacent bits of the planet until the planet rotates back to where the first bit was sampled, at which point you can sample the next spot over, looking from an infinitesimally different angle. So, possibly, it is scanning a line across the planet's surface, and then later another line nearby. But each line would cover almost the same track as the last.

Of course what it really would do is just watch the planet continuously for years, and send literally everything back, where the planet's presumed rotation could be puzzled out, and then everything could be stitched together afterward. Data from a single probe might suffice to yield an image.

In effect, the spacecraft moving is sweeping across one axis very, very slowly, and the planet's rotation is sweeping on another, albeit probably not one at 90 degrees, but at some angle to the first. (It would be very unlucky for the two to align, but wholly possible.)

Since all this would happen over days and years, the image would end up being an average of that many years' weather, so nothing like a snapshot. But wherever the cloud cover ever breaks, any continental shorelines would still come out sharp.


You're describing the same concept.

> The spacecraft would be taking one pixel, then awhile later another pixel, then another.

This is what each pixel in a scanner's CCD does. In his example, the swarm would be like the row of pixels.

> unlike your scanner bed, which collects many thousands simultaneously

The swarm would.

> watching a different bit of planet surface. Two thin line samples would not be very informative, but hundreds could be.

You would want to do this simultaneously, like in a scanner, with some distance between the sensors, so they could sample different "thin lines" of the planet, which could be stitched together to make one image, just as in a scanner. No need to wait. Regardless, a rigid physical or temporal lock isn't really required here. The concept is the same. You could arrange the swarm as a line, diagonal, grid, whatever.


Simultaneity would be pointless, because the next pixel over, from any given probe, would be from (earth-) weeks later. It would pick up samples from each planetary day in between, too, but each would almost completely overlap the previous day's, stepping out until the probe has gone enough distance to be sampling a separate bit of surface. Some information might be teased out of the overlapping pixels, but they would all be from different days.

It would be bad luck to choose a Venus-analog to map.


> Simultaneity would be pointless

I might have some fundamental misunderstanding, because I don't understand how this could be.

> stepping out until the probe has gone enough distance to be sampling a separate bit of surface.

Why not have a probe already at that distance, so it's gathering something of that surface?

My assumption here (knowing very little about gravitational lensing) is that the gravitational lens still has the concept of an "image surface", where a translation in that image surface can be mapped to some translation in the projection of the thing being viewed. Are you saying that if I put two probes up, with some appropriate spacing between them, they can't collect different stripes of the same surface at the same time?


Same time for the probes is easy. Same time for the planet is harder.

But if the probes are recording continuously and sending it all back, you can probably identify points that are simultaneous on two tracks, after the fact.

But that gives you just a scattering of points on that day. The next pixel over, for each probe, will be for a different planetary day. Your image, stitched together from all the lines returned by all the probes, is an image smeared over at least as many days as pixels in each line.


Having access to a single pixel of a rotating object is enough to get an image. It's more or less the 3D equivalent of an inverse Radon transform.
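
For anyone curious, here's a loose 2D illustration of that idea using scikit-image: reconstructing an image from its projections with the inverse Radon transform. The exoplanet case is only analogous (a time series from one pixel of a rotating body rather than a proper sinogram), so treat this as a sketch of the concept, not the reconstruction you'd actually run:

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale

    image = rescale(shepp_logan_phantom(), 0.25)           # small test image
    angles = np.linspace(0.0, 180.0, 60, endpoint=False)   # "rotation" samples
    sinogram = radon(image, theta=angles)                  # forward projections
    recon = iradon(sinogram, theta=angles)                 # inverse Radon transform
    print("mean reconstruction error:", np.abs(recon - image).mean())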


Compared to getting spacecraft there accurately, these are all familiar, easily solved problems. But yes, these plans usually call for a swarm of ships.


That link just opens a standard map of the whole globe in a flat projection - I think you might need to use the sharing link generator to share what you intended.


I think that's what they wanted? Although it's still a strange way to communicate it, given that it'll change wildly based on screen resolution. A screenshot would probably be a better idea.


I think the link was intended to show 10 km/pixel, to give an idea of what it would look like. It shows a scale bar of 1000 km to me; if it is 10 km/pixel then there should be 100 pixels within that length, which seems okay as a sanity check. But yeah, it depends on the screen resolution as well.


I'm not sure if it's possible to easily share the correct settings in an URL, but:

In the Layers menu, set it to Satellite, Globe view, and turn off Labels. Then, zoom until the earth is ca. 1275 pixels wide


Thanks, fixed!


and here's that at night, seems like a similar resolution:

https://earthobservatory.nasa.gov/images/79800/city-lights-o...


I think you might want to switch to satellite view. There are definitely human-made structures that you can recognize at 10km/px.


Weird, fix didn't work either :( Just use the imgur :)


For those like me who need some kind of reference for the distances mentioned (548-900 AU), Voyager 1 is 156.5 AU from earth today.


Yeah, the solar lensing point is WAY the hell out there: 1 AU is about 8.3 light-minutes, so the 548 AU minimum works out to roughly 76 light-hours away. When LIGHT takes half a week to make a one-way trip, you're in the deep space boondocks, folks.

I haven't read the paper yet, but this thing would have to have a fair amount of nuclear power, and comms would be a challenge as well. As the abstract mentions, though, while the project has a high degree of difficulty, there appear to be no complete technology showstoppers to actually doing this, so it's at least as doable as (and considerably cheaper than) a von Braun-style centrifugal space station in Earth orbit.

It'll be interesting to see if the idea gets any traction...


For those looking for a live look at Voyager status: https://voyager.jpl.nasa.gov/mission/status/


The solar system explorer view they have on that page is fantastic!

I recommend clicking on the "solar system" toggle in the bottom middle of the view. It gives you a real sense of the planets, probes, asteroids etc that are flying around our solar system.

Also reminds me of looking at air traffic control maps and what that might look like once intra-solar system space travel becomes routine.


and 900 AU is about 0.014 light years, so for a target ~100 light years away you get 10 km resolution at an object-to-focal-point distance ratio of roughly 7100:1 (what's the right term for this?).

seems pretty good.
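
Quick unit check on those figures (the 100-light-year target distance is an assumption, taken from the paper's stated goal):

    AU_M = 1.496e11   # meters per AU
    C = 2.998e8       # m/s
    LY_M = 9.461e15   # meters per light year

    print(548 * AU_M / C / 3600)        # ~76 light-hours to the start of the focal region
    print(900 * AU_M / LY_M)            # ~0.014 light years at 900 AU
    print(100 / (900 * AU_M / LY_M))    # object-to-focal-point ratio, ~7,000:1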


I wonder how much faster we could get there with solar sail and laser boost from Earth


One issue would be slowing down when you get there.

You'd need to carry a deployable/detachable mirror with you to reflect the laser back at the craft, but that mirror itself would also get accelerated further out, which means having to correct for that, etc., etc.


For this mission, you don't need to slow down or stop. Just keep taking image data starting at 548 AU and keep going until you're at 900 AU.


I thought you have to move laterally to get the pixels? "The data are acquired pixel-by-pixel while moving an imaging spacecraft within the image."


You can already do that while slowing down. Might require some image correction though to account for that in the pictures.


With a solar sail you can just start tacking, right?


I think a solar sail mostly works at a broad reach or a run, so it would be more of a jibe than a tack ;)


I don't know anything about sailing but doesn't tacking depend on some sort of keel?


With a solar sail you can "tack" by reflecting photons against your tangential velocity – but that only works if the tangential component is indeed what you want to shed. Getting rid of radial velocity is more difficult, and in a hyperbolic (escape) orbit radial is mostly what you have.


Probably not at 540 AU. Not many photons out there for the solar sail to grip.


Lol, nice


Can't you use the same lens to focus the propulsion laser?


Without specifying the size of the laser, anything from "even slower" to "whole trip in just under 16 days".

https://en.wikipedia.org/wiki/Breakthrough_Starshot

http://www.wolframalpha.com/input/?i=548%20AU%2F0.2c


That's 16 days if the average speed is 20% lightspeed. In reality you need to speed up, and then when you're halfway there, slow down again.

Which raises the other question: how would we slow this thing down so it just doesn't keep going past the 900AU mark?

Regardless, I guess we would be fine picking a slower speed that would get it there in a few years, which might be significantly easier to achieve.


Breakthrough Starshot aims to do all that acceleration in 10 minutes.

It’s an ambitious project.


Hah, with the latter number requiring some significant percentage of Earth's mass being converted to energy or something?


Not really, it's the velocity the "Breakthrough Starshot" probes would reach. They propose[1] that launching each probe would take 84 GWh, which is not all that much (about 15 times more than a Space Shuttle launch), but of course the Starshot probes would be much lighter than this proposed telescope, so it's not directly comparable.

[1] https://youtu.be/KIDuXQHt8pk?t=1562


Accelerating a meter class telescope to .2 c is well beyond our current capabilities, but it's nothing like that fast.


If we assume the meter-class telescope + power supplies and whatever else masses ~1 metric ton, it would take about 1.853 * 10^18 J (1.85 quintillion joules) to accelerate it to 0.2c. Only ~442 megatons of TNT. Keep in mind, that is assuming 100% efficiency, which would be impossible with a light sail or any other known technology.

With all the various inefficiencies in power collection/generation, laser generation, momentum/power transfer, etc. we'd be talking probably somewhere around 5% end-to-end power transfer - if we were lucky. Which is still way better than the rocket equation (probably).

So to get the required 1.853 * 10^18 J at 5% efficiency, we'd need 20x more energy at Earth to accelerate it. So 3.706 * 10^19 J. Which starts to get more concerning, at roughly 8,900 megatons (about 8.9 gigatons) of TNT.

Let's take the most efficient means we can imagine to produce energy: direct matter annihilation. Annihilating 1 kg of mass (using 500 g of antimatter, 500 g of matter) produces 8.988 * 10^16 J of energy. If we could somehow feed the resulting energy directly into the laser for accelerating the craft, and assume near 100% efficiency in doing so, before laser losses, we'd only need roughly 400 kg of matter/antimatter to do so.

Not bad!

But wait, our more likely end-to-end efficiency is at best 1%. Hmm. Which would require 100 times the spacecraft's kinetic energy as input: 1.853 * 10^20 J, or about 44 gigatons of TNT.

That's roughly 2,000 kg of matter/antimatter.

Which is definitely not a significant fraction of Earth's mass, but yikes. I wouldn't want to pay that energy bill!

That's 51,480,512,292,500 kWh (the 'wall plug equivalent'), which at my current rates would be about $26 trillion!
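
Re-running that arithmetic (the 1-tonne spacecraft mass and the 5% / 1% efficiency figures are the assumptions from the comment above, not numbers from the paper):

    C = 299_792_458.0
    M_KG = 1000.0                 # assumed spacecraft mass
    V = 0.2 * C
    TNT_MEGATON_J = 4.184e15

    gamma = 1.0 / (1.0 - (V / C) ** 2) ** 0.5
    ke = (gamma - 1.0) * M_KG * C**2                          # ~1.85e18 J
    print(f"KE at 0.2c: {ke:.3e} J = {ke / TNT_MEGATON_J:.0f} Mt TNT")

    for eff in (0.05, 0.01):
        e_in = ke / eff
        annihilated_kg = e_in / C**2                          # total mass converted
        print(f"{eff:.0%}: {e_in:.2e} J, {e_in / TNT_MEGATON_J / 1000:.1f} Gt TNT, "
              f"~{annihilated_kg:.0f} kg annihilated (half of that antimatter)")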


The fastest way there would probably be with a nuclear propulsion engine, as researched in Project Orion.

https://en.wikipedia.org/wiki/Project_Orion_(nuclear_propuls...


Princeton FRC


There's a nice article on Centauri Dreams, just published, looking at solar and nuclear options for such a mission.

https://www.centauri-dreams.org/2022/07/26/getting-there-qui...


A much better idea is the terrascope, using Earth's upper atmosphere as a refracting lens. This wouldn't require a multi-decade mission, nor a huge sunshield.

https://arxiv.org/abs/1908.00490


An excellent video on this: https://www.youtube.com/watch?v=jgOTZe07eHA

Earth's atmosphere and weather affect this sort of telescope IIRC, and it will filter out some wavelengths, but it seems like a much easier win.


Couple of examples shot from the ISS - https://www.youtube.com/watch?v=1t8UNxY2bgQ


Interesting. I wonder if there are planets with atmospheres in our solar system that would allow testing this from Earth orbit (e.g. the ISS or a satellite looking at objects lensed by Jupiter, etc., because the refraction and distances work out).


the sunshield would not be huge at 900 AU


The fact that this telescope takes 30+ years to get into position and take a photo, and that in its lifespan it can only look at a single planet, is a real disadvantage...


You just have to mass produce these things and send them to interesting targets in parallel.


Or make the sun heavier to move the focal point closer :-D

(aside from the obvious impossibility, would it work? I'm sorta assuming that heavier mass => more distortion => closer focal point. Is that correct?).


Yes... Or smaller.

Light from far enough away is focussed by any mass, and will appear as a 'ring' around the mass. But if the mass isn't heavy enough the 'ring' that you see might be behind the object itself.


2042: Earth receives a message from an alien intelligent species 32 light years away.

2109: Aliens receive a reply, which includes a map of their own world, including where their largest cities are located.

I'm just saying, this is totally a possibility! We can creep on the neighbours!


> Aliens receive a reply, which includes a map of their own world, including where their largest cities are located.

I think we'd interpret this as a threat, maybe, like a target map. They might too.

Probably best to reply with something more innocuous, like the Fibonacci sequence.


yeah, that was my reaction as well.

"We hear you, and just letting you know, we know where you live and have started targeting solutions. Just so you know, we're all armed down here!"


If the aliens are as willing as we are to interpret everything through a lens of violence then we likely have no chance in the first place IMHO.


If mutually assured destruction worked for human tribes, maybe it'll work between interplanetary tribes as well.


I think you folks all need to read The Three Body Problem and learn about the Dark Forest Theory :)


Yeah, and I bet after we send that, they stay off our lawns, right?


Or they fling a tungsten rod at us.


If using the Sun as a lens is practical for us, it's likely to be practical for them. They can manage radio transmission, so it's a matter of time before they image Earth as well: relativity, rocket science and electronics are not far off from radio transmission in terms of knowledge.


I'm reading The Three Body Problem series right now. Let's not mess around. ;)


Sort of an interstellar "I know where you live".

Way to make friends!


Video about it by Launch Pad Astronomy: https://youtu.be/NQFqDKRAROI


There are a ton of astronomy YouTube channels, but this is now my favorite; it dethroned PBS Space Time. The video that converted me was his supernova explanation https://youtu.be/RZkR9zdUv-E - for all the content out there, no one else has anything this good about supernovae.


Thanks, that was nice.


Starshade deploying:

https://exoplanets.nasa.gov/resources/1015/flower-power-nasa...

"The "petals" of the "sunflower" shape of the starshade are designed to eliminate the diffraction that is the central feature of an Aragoscope."

"The starshade is a spacecraft designed by Webster Cash, an astrophysicist at the University of Colorado at Boulder's Center for Astrophysics and Space Astronomy. The proposed spacecraft was designed to work in tandem with space telescopes like the James Webb Space Telescope, which did not use it, or a new 4-meter telescope."

https://en.wikipedia.org/wiki/New_Worlds_Mission


I wonder if a cluster of stars can be treated as a MIMO scattering channel for more distant unknown objects? If so, it should be possible to resolve details in the unknown object. I guess the geometry would have to be such that there is appreciable signal from each star/scatterer (i.e. sort of collinear), but there would be no requirement to be on a focal line?


One of the first James Webb Space Telescope pictures is of a massive galaxy cluster that causes visible gravitational lensing, which produces distorted images of highly redshifted galaxies behind it. Seems our galaxy is located near its focal region.


You should also be able to use the fact that empty space is (very nearly) black to constrain any optimization algorithm.


Neat idea!


I guess one problem could be that the scatterers/stars are luminous, so their own emissions might overpower the scattered signal? Maybe a cluster of darker objects? Or maybe luminous objects are okay if the scattered waves are coming from dark regions surrounding the scatterers and have enough angular separation that they can be resolved from the scatterer's own emissions?


hmm.. right.. if the angle of deflection is low and the star is close enough that its light and deflected light show up very close together. My intuition is this is not the case... remember Eddington's test of relativity was for deflection of starlight around our Sun. We're really close, yet it was observable with the moon obscuring the main sunlight.

The article[1] says "For light grazing the surface of the sun, the approximate angular deflection is roughly 1.75 arcseconds." So, what, we take the tangent of 1.75 arcseconds (essentially the angle in radians) to get the apparent divergence ratio, and multiply that by the distance to the stars? As long as that value is larger than the aperture of your camera, then you don't get competing light? Or maybe you'd need something like the TESS satellite, where you have a screen specially created to only allow certain beam transits into your detector.

I've worked with a nearest 10k stars database (https://celestiary.github.io/, zoom way out) and the edge of that is about 2k light years away. So very roughly, let's say there's 1/8th of those in a certain direction... so you get.. what? some 2k sample points towards some distant object? But really most of them wouldn't deflect that object's light towards Earth, but usually over or undershoot.

Don't really know how to put these together quickly, but is giving me some good food for thought!

[1]https://en.wikipedia.org/wiki/Eddington_experiment


Also thinking about channel sounding. MIMO usually has a method of measuring the channel response with known data (or a known property of the data, such as a modulation type), then the channel is either assumed to be stationary whist the unknown data is sent, or a model is used to extrapolate the channel response.

I wonder if the star around which the exoplanet orbits can be used to sound the channel? The light from the star would contain information in the form of its spectra. Maybe this can be used to get the channel response? Perhaps the spectra can be treated as a form of modulation?


Interesting. I'll have to check that out.

Maybe related, depending on how close the light from the far target is to tangent near the lensing star, there is also an atmosphere around the star that is emissive. I was thinking that's mostly noise, but maybe it's accelerated enough to shift its spectrum and serve some purpose in measuring the lensing strength? But either way, would need to characterize it enough to remove it from signal.


Heya, I'm really inspired by the idea. I propose we work it up into a paper, On the Existence (or not) of a Multiple Gravitational Lens Telescope.

I've created a project under Celestiary since I think we can use the code there to do the search on the Celestia star database and also do simulations.

https://github.com/celestiary/mglt

I hope that's interesting and that we can work together!

Cheers, Pablo


I think I've read about using a swarm of these probes "at the focal region of the solar gravitational lens" (i.e. in a spherical shell starting 548 AU from the Sun) in a sci-fi novel.

I think this one: https://www.goodreads.com/book/show/13039884-existence


This would also work for radio waves, right? We would be able to listen in on radio broadcasts from a distant planet.


Yes, it's possible, but the refractive index decreases with increasing wavelength to the point that this would not be feasible.

Radio wavelengths are ~9 orders of magnitude larger than optical, meaning the detector would need to be placed roughly 30 light years from earth. A bit out of our reach.

(assuming I did my back of the envelope math right)


Uh, never realized that gravitational lensing was stronger for higher frequencies ... is that a fact? Any reference?

[EDIT]: the formula on page 4 of the book below does not seem to have a frequency parameter ...

Also, they seem to consider that objects worth looking at are at infinity, which is why there is a well-defined focal distance from the sun.

What seems to involve frequency is that the gain of the lens varies (same book, page 9).

http://erewhon.superkuh.com/library/Space/Spacecraft/Deep%20...


Oops my bad, thought this was a comment under another thread discussing terrestrial lensing which uses Earth's atmosphere, not gravitational lensing. Disregard!


But we're using a lens that bends spacetime. I'm not sure index of refraction is relevant. On the recent image from JWST, I don't see any refraction artifacts on the lensed images of galaxies. https://www.nasa.gov/image-feature/goddard/2022/nasa-s-webb-...


Oops my bad, thought this was a comment under another thread discussing terrestrial lensing which uses Earth's atmosphere, not gravitational lensing. Disregard!


I wonder whether any stars 30 light years away have interesting objects behind them.


Before I read this... "focal region" sounds nonsensical to me. Surely the focal length and direction depends on the thing you want to look at? The "focal region" would be something like a sphere starting some distance from the sun and extending out in all directions to infinity?


> Surely the focal length and direction depends on the thing you want to look at?

Yes.

And "aiming" your observatory involves moving it on that sphere. Given the distances involved that is pretty much either impossible or time prohibitive.

The region starts at 548 AU from the Sun. So 548 times the average Sun-Earth distance.

In an ideal world you would teleport your camera to this location instantaneously, take a picture and then teleport to the next location to look at something else.

We don't know how to do that. The distances are immense.

So instead we pick a target, and send out a satellite or satellites on the opposite vector from it to take a peek. There is a single point where the target will be in perfect focus, but in practice (as the paper shows) the target is "in-focus" enough in a larger region that your satellites can take a picture while they fly through the region around the ideal point.


If all this effort is expended only to look at a single object, the whole idea seems kind of pointless.

However, we are using the mental model of a camera lens to reach that conclusion, and I'm not certain it applies:

    1) even if we use the mental model of a camera lens, given the distances involved, you can probably consider most of the interesting targets to be "at infinity"

    2) I'm not certain that a gravitational lens works like a camera lens.


It behaves differently of course, and the pictures would be massively distorted. But with some image processing it should be possible to do some analysis. One of the recent JWST images shows an example where a massive galaxy cluster makes redshifted galaxies behind it visible.


Given the scales we're talking about I presume that everything you want to look at is effectively "at infinity", as such I expect that you consider where two parallel rays would meet due to solar lensing and that's your focal region.


I don't believe so. I believe that the size and gravity of the Sun have a gravitational effect on space and result in a focusing of light at a 500+ AU distance from the Sun. Hence the focal region is around this sphere and is where you would "sit" to use the Sun as a lens and get information about the universe on the opposite side of the Sun.

https://en.wikipedia.org/wiki/Solar_gravitational_lens


I think that's why it's called a focal region instead of a focal point.


It's an interesting idea, and why not write a paper about it and try to get it published? Every academic needs papers with their name on them, and thinking about this had to be fun, if you don't have anything useful to do with your time.

But as an actual scientific investment, in my opinion, it belongs pretty near the bottom of the pile of things we should spend our astronomy / cosmology / astrophysics budget on. The cost per unit of new information is just way too large, and the risk of mission failure too high, to justify making it a priority.


Are you a physicist? This sounds like the opinion of a non-physicist. In physics we usually work on and publish basically any idea possible, just so that we have a full picture of what actually is possible and what the challenges are. Of course it's better if it's practical but that's not a necessity


Trained in physics, computer scientist and engineer by profession, now retired.

My opinion wasn't about the feasibility of this, but rather that of a tax-paying citizen who expects to get value for money spent. I don't see it in this, and I don't really expect the authors imagined they would be describing something that was likely to get any funding attention - the project is too big, and too far out there from an engineering perspective, given the little we'd get from it. So, yeah, I think their writing a long feasibility study is just published paper padding.


But a feasibility study is important scientific work even if the mission is unlikely to be flown. It could lead to much more realistic or cheaper designs, for example.


It's not a new idea and there already have been dozens of papers, and entire books (http://erewhon.superkuh.com/library/Space/Spacecraft/Deep%20...), published about using the solar gravitational focus as a lens and sending a spacecraft there. In fact NASA was already funding early mission studies like the Heliopause Electrostatic Rapid Transit System (HERTS) a decade ago.

And not only is it good for planetary sciences, it's good for cosmology too since it enables looking at the truly small scale structure of the cosmic microwave background. A mission to the gravitational focal line opposite some star should be one of the highest priorities.

http://erewhon.superkuh.com/library/Space/Spacecraft/Diffrac...

http://erewhon.superkuh.com/library/Space/Spacecraft/Direct%...

http://erewhon.superkuh.com/library/Space/Spacecraft/Image%2...

http://erewhon.superkuh.com/library/Space/Spacecraft/Mission...

http://erewhon.superkuh.com/library/Space/Spacecraft/Photome...

http://erewhon.superkuh.com/library/Space/Spacecraft/Resolve...


I'm an engineer (current job is designing and building small spacecraft), so forgive me for raining on physicists' dreams, but a huge reason such a telescope is not currently practical is the enormous amount of propellant required to get you there and then station keep to stay focused on a target. The latter is usually glossed over in this concept but it alone blows mission cost out of the water. Would make James Webb look like a cute little project.

If you don't have an active propulsion system for station keeping then you cannot pick targets. Who wants that? Nobody.

This concept is 50 years off at minimum.


That's why the NASA study (HERTS) suggested using an electrostatic solar sail. Its only consumables are the noble gas and/or metal plating of the hollow cathode electron emitters used to keep the craft positively charged.

Actual station keeping +- a few meters once out on the gravitational focal line opposite the target system (which itself is moving) would require some cold gas thrusters that would limit the duration of the mission. But the forces and accelerations required are very small when you're 600 AU out from the sun and even the photon pressure isn't throwing things off anymore.

Yes, getting out there with enough mass for station keeping would be hard, and it'd require new methods: either electrostatic solar sails, or a more traditional H-reversal Oberth burn close to the Sun.


Claudio Maccone's proposal, to put a radio telescope at the solar focal point for SETI or communications purposes, is included as a chapter titled "Radio Links enabled by Gravitational lenses of the Sun and Stars" in the book Communications with Extraterrestrial Intelligence edited by Douglas Vakoch [1].

[1]. https://www.seti.org/book/communications-extraterrestrial-in...


Are trajectories possible that bring the satellite into the focal points of more than one exoplanet, either of the same or a different target system?


An interview with Dr. Slava Turyshev, one of the authors of the paper, talking about the project:

https://www.youtube.com/watch?v=lqzJewjZUkk


What does the focal region of a 3 dimensional solar gravitational lens look like?


30 years... I wouldn't be surprised if a bigger and better telescope launched 10 years later arrives there 15 years before this crazy contraption.


Maybe, but only if there's a big leap in propulsion technology, and as far as I know there's not been anything big in that regard in the past 50-odd years. Closest thing is probably reusable rockets to reduce cost.


There's a few things that could be used to complete the mission significantly faster with basically-current tech (e.g. NERVA, or Project Orion), but even for deep-space missions they are unlikely to see use any time soon.


On that timescale my bet would be on very large telescopes built in space nearer us (supplanting the ones built on Earth and launched then unfolded). This wouldn't perfectly substitute for a solar gravitational lens scope, but it could do a hell of a lot.


Isn't that focal "region" actually a heliocentric sphere?

If so, where on that sphere would we place the scope?

Or would it sort of glide on that sphere to be able to look at specific points in the universe?


Can someone ELI5 for me?


Use the sun's gravity as a giant lens (bends light) so you can see a really long way away.

But the focal point of this lens is about 500 times the distance from the Earth to the Sun, so it's difficult to get to.
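
For the curious, the ~548 AU figure follows from the standard light-deflection formula: a ray grazing the solar limb is bent by theta = 4GM/(c^2 b), so it crosses the axis at roughly b/theta. A minimal sketch with standard constants (rays passing farther from the Sun focus farther out, which is why it's a focal region rather than a single point):

    G = 6.674e-11        # m^3 kg^-1 s^-2
    M_SUN = 1.989e30     # kg
    R_SUN = 6.957e8      # m, impact parameter of a limb-grazing ray
    C = 2.998e8          # m/s
    AU = 1.496e11        # m

    theta = 4 * G * M_SUN / (C**2 * R_SUN)    # deflection angle, ~8.5e-6 rad
    print(theta * 206265, "arcsec")           # ~1.75 arcsec
    print(R_SUN / theta / AU, "AU")           # ~548 AU to the inner focus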


> solar sailing technologies and in-space aggregation of modularized functional units to form mission capable spacecraft

Am I alone in thinking this is somewhat pointless to discuss before the prerequisite technology is developed? It's a bit like "how to keep your sentient sexbot from deciding to murder you". Like if we could do those things in the first place there would be a thousand applications with a better return on investment than this.


> Am I alone in thinking

Probably not. Most people have trouble thinking long term.

> this is somewhat pointless to discuss before the prerequisite technology is developed?

People wouldn't develop said prerequisite technologies if there are no applications for it. This paper shows that if we would have those technologies we could get this neat thing.

> there would be a thousand applications with a better return on investment than this.

Name them.


Well, Whitfield Diffie envisioned an internet-like network around 1974, then went on to figure out how we can keep secrets when everyone has a computer in their home.


Yeah, have you heard of that patent clerk in Switzerland who wrote about what changes when we ride on a train close to the speed of light - even though we're still not capable of building a train that fast, more than a century later? What a waste.


> somewhat pointless to discuss before the prerequisite technology is developed

From the article "The study reveals elements of such a challenging mission, but it is nevertheless found to be feasible with technologies that are either extant or in active development." (emph. mine)

It's pointless to discuss the application of technologies in active development?

> there would be a thousand applications with a better return on investment than this

You veered into a baffling non-sequitur there. ROI on a unique science mission to image an exoplanet 100 light years distant at a resolution of tens of kilometers, for potential human habitation? The successful ROI would be incalculable.


I'd say it's pretty useful to discuss hypotheticals of all kinds. In this case, generating ideas for uses of a technology that is under development might increase interest and therefore funding, or recruit new people to the cause, and generate new ideas that may be useful for the active development of the technology.


It seems there are plenty of ideas for ever more sophisticated imaging technologies. That's nice. However, it would be more reassuring to come up, at the same time (or in due course), with a similar supply of clever ideas for more sophisticated rocket engines, or, more likely, with a long series of fundamental contributions to our understanding of physics and biology. The ambition of the proposed mission is to reach a focal region ~548-900 AU away in order to image exoplanets up to 100 light years distant. I am sorry to have to remind us all of this, but given the excruciatingly long journey to reach a region no closer than 548 AU, it would be even more "painful" to discover still more distant exoplanets that would remain beyond reach for all practical purposes. As for me, discoveries of this kind should remind us to reaffirm our commitment to take care of the only planet we can live on for the foreseeable future.


Astronomy still provides useful insights regardless of whether we can reach those places or not.

That said, we are indeed researching ever more advanced propulsion technologies!

We've made great strides in electric propulsion, which is far more efficient for long voyages than chemical rockets. This tech is already in wide use today in satellites and probes of all kinds.

We're ramping up research in nuclear rocket propulsion again. There are several branches here: nuclear electric, nuclear thermal and nuclear pulse. Of these, the last one is the least developed since it basically means using nuclear explosions to boost you, but it has the most promise for futuristic spaceship drives.

There's also the possibility of using antimatter pulse drives, but that's a hairy can of worms. It's very hard to produce the fuel in sufficient quantities.



