A few years ago I saw that someone had taken a photo of their backyard using a single light sensor.
The idea was that at each moment of each day, the sun was in a different spot in the sky, and each day it shifted toward or away from the equator. So they stitched the samples together mathematically, based on timestamp and a 3D projection of the earth. You could tell it was a backyard, but it was very gauzy, like that moment when you first wake after a nap.
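A minimal sketch of that stitching idea (my reconstruction, not the original project's code), assuming a helper `sun_az_el(t)`, hypothetical here, that turns a timestamp into the sun's azimuth and elevation at the sensor's location; a flat equirectangular binning stands in for their 3D projection:

```python
import numpy as np

def stitch(timestamps, readings, sun_az_el, width=360, height=180):
    """Bin single-sensor brightness readings into a 2D grid keyed by
    where the sun sat in the sky at each sample time."""
    acc = np.zeros((height, width))
    hits = np.zeros((height, width))
    for t, v in zip(timestamps, readings):
        az, el = sun_az_el(t)                        # degrees
        x = int((az % 360.0) / 360.0 * (width - 1))
        y = int((el + 90.0) / 180.0 * (height - 1))
        acc[y, x] += v
        hits[y, x] += 1
    return acc / np.maximum(hits, 1)                 # average repeat visits
```

The gauziness follows directly: every grid cell averages samples taken on different days, under different weather.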
The single line comes from the trajectory of the spacecraft as it flies through the focus point, reading one pixel at a time. Stopping and rastering is not possible, nor close to possible, with current technology. Of course, you still want a raster grid, but you have to obtain it by sending a swarm of spacecraft. Each collects a line as it flies through the focus point, and then you can assemble the lines into a grid.
The spacecraft would be taking one pixel, then a while later another pixel, then another. It could capture all kinds of detail at each pixel, with full spectra, polarization, what have you, but just one pixel at a time, unlike your scanner bed, which collects many thousands simultaneously, with as many separate optical sensor elements.
After it has collected the whole series, that would be the one thin row of pixels sent home.
The next spacecraft over could collect another line and send that, watching a different bit of planet surface. Two thin line samples would not be very informative, but hundreds could be.
It is actually, potentially, a bit better: the planet would be rotating while the spacecraft moves between pixel points, and the spacecraft could keep sampling the whole time, picking up a series of adjacent bits of the planet until the planet rotates back to where the first bit was sampled, at which point you can sample the next spot over, from an infinitesimally different angle. So, possibly, it is scanning a line across the planet's surface, and then later another line nearby. But each line would cover almost the same ground as the last.
Of course what it would really do is just watch the planet continuously for years and send literally everything back, where the planet's presumed rotation could be puzzled out and everything stitched together afterward. Data from a single probe might suffice to yield an image.
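One standard way to puzzle out the rotation from a continuously recorded light curve (my suggestion; the comment doesn't specify a method) is autocorrelation: the strongest self-similarity lag after zero is a candidate rotation period. A sketch, assuming evenly spaced samples:

```python
import numpy as np

def rotation_period(flux, dt, min_lag=1):
    """Estimate a rotation period from a one-pixel light curve via
    autocorrelation; dt is the sample spacing, and min_lag should be
    set large enough to clear the shoulder of the zero-lag peak."""
    x = np.asarray(flux, dtype=float) - np.mean(flux)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    lag = min_lag + int(np.argmax(ac[min_lag:]))
    return lag * dt
```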
In effect, the spacecraft's motion sweeps across one axis very, very slowly, while the planet's rotation sweeps another, albeit probably not at 90 degrees, but at some angle to the first. (It would be very unlucky for the two to align, but it is wholly possible.)
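A sketch of that geometry, with invented parameters: the probe's drift supplies one image coordinate, the rotation phase the other, and the two combine at whatever angle theta they happen to make. At theta = 0, the unlucky aligned case, the second coordinate collapses and both sweeps trace the same line:

```python
import numpy as np

def sample_xy(t, probe_speed, rot_period, theta_deg, line_len=1.0):
    """Map a sample time to image-plane coordinates: a very slow sweep
    along the probe's track, plus a fast sweep from the planet's
    rotation, the axes meeting at angle theta rather than a clean 90."""
    u = probe_speed * t                            # slow axis: probe drift
    w = (t % rot_period) / rot_period * line_len   # fast axis: rotation phase
    theta = np.radians(theta_deg)
    return u + w * np.cos(theta), w * np.sin(theta)
```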
Since all this would happen over days and years, the image would end up as an average of that many years' weather, so nothing like a snapshot. But if the cloud cover ever breaks, any continents' shores would still be sharp.
> The spacecraft would be taking one pixel, then a while later another pixel, then another.
This is what each pixel in a scanner's CCD does. In his example, the swarm would be like the row of pixels.
> unlike your scanner bed, which collects many thousands simultaneously
The swarm would.
> watching a different bit of planet surface. Two thin line samples would not be very informative, but hundreds could be.
You would want to do this simultaneously, like in a scanner, with some distance between the sensors so they could sample different "thin lines" of the planet, which could then be stitched together to make one image. No need to wait. Regardless, a rigid physical or temporal lock isn't really required here; the concept is the same. You could arrange the swarm as a line, a diagonal, a grid, whatever.
Simultaneity would be pointless, because the next pixel over, from any given probe, would be from (earth-) weeks later. It would pick up samples from each planetary day in between, too, but each would almost completely overlap the previous day's, stepping out until the probe has gone enough distance to be sampling a separate bit of surface. Some information might be teased out of the overlapping pixels, but they would all be from different days.
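Rough numbers for that stepping-out, all figures invented for illustration: if a pixel's footprint along the track is s and the probe drifts d per planetary day, consecutive days' samples overlap by 1 - d/s, and it takes about s/d planetary days to reach wholly new surface:

```python
s = 1.0    # pixel footprint along the probe's track (arbitrary units)
d = 0.02   # probe drift per planetary day (same units, hypothetical)
overlap = 1 - d / s          # 0.98: each day repeats 98% of the day before
days_to_new_ground = s / d   # ~50 planetary days to a fully separate pixel
print(overlap, days_to_new_ground)
```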
It would be bad luck to choose a Venus-analog to map.
I might have some fundamental misunderstanding here, because I don't see how this could be.
> stepping out until the probe has gone enough distance to be sampling a separate bit of surface.
Why not have a probe already at that distance, so it's gathering something of that surface?
My assumption here (knowing very little of gravitational lensing) is that the gravitational lens still has the concept of an "image surface", where a translation in that image surface can be mapped to some translation in the projection of the thing being viewed. Are you saying that if I put two probes up, with some appropriate spacing between them, they can't collect different stripes of the same surface at the same time?
Same time for the probes is easy. Same time for the planet is harder.
But if the probes are recording continuously and sending it all back, you can probably identify points that are simultaneous on two tracks, after the fact.
But that gives you just a scattering of points on that day. The next pixel over, for each probe, will be from a different planetary day. Your image, stitched together from all the lines returned by all the probes, is smeared over at least as many days as there are pixels in each line.
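A toy illustration of that smear, with invented numbers: if each probe takes `days_per_pixel` (earth-)days to drift one pixel along its line, then column j of every line is acquired roughly j * days_per_pixel after column 0, no matter how many probes you send:

```python
import numpy as np

n_probes, pixels_per_line, days_per_pixel = 300, 300, 14  # all hypothetical
cols = np.arange(pixels_per_line) * days_per_pixel
day_map = np.tile(cols, (n_probes, 1))   # acquisition day of every pixel
print(day_map.max() / 365.0)             # total smear: about 11.5 years here
```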
Such a line taken of Earth is likely to have nothing in it but ocean and clouds.
If you sent a swarm, each probe would get its own line of pixels: as many lines as spacecraft.