Hacker News

This kind of setup, with a large number of small, cheap detectors, works well for observing diffuse objects with low surface brightness. Generally speaking, telescopes improve with size because larger telescopes can resolve smaller objects, so you can concentrate the light from your source into a smaller patch and increase the signal-to-noise ratio with respect to the background. But once you have resolved the object (which doesn't require a very large diameter for a diffuse object), you no longer get any benefit from a larger telescope except for the greater light-collecting power. So there's no benefit to a single large mirror over a large number of smaller detectors. Since it's a lot easier and cheaper to buy a bunch of off-the-shelf components than to build a large mirror from scratch, that is what they did here.
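To make the "no benefit except light-collecting power" point concrete: for a resolved, background-limited source, SNR scales roughly with the square root of collected photons, i.e. with total aperture area times exposure time, so N small apertures are equivalent to one dish of the same combined area. A quick sketch with made-up numbers (the lens count and diameter are assumptions for illustration, not from the thread):

```python
import math

n_small = 48     # hypothetical number of off-the-shelf camera lenses
d_small = 0.14   # assumed aperture of each lens, in metres

# Total collecting area of the array of small lenses.
area_small = n_small * math.pi * (d_small / 2) ** 2

# A single mirror with the same collecting area would need this diameter:
d_equiv = d_small * math.sqrt(n_small)

# Background-limited SNR ~ sqrt(area * exposure_time), so equal area
# means equal SNR on an already-resolved diffuse object.
print(f"array area:          {area_small:.3f} m^2")
print(f"equivalent mirror:   {d_equiv:.2f} m diameter")
```

The point of the arithmetic: once resolution stops mattering, only the sqrt(N) growth in collected light does, and that is the same whether the area comes in one piece or many.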

A friend of mine in grad school worked on a project similar in spirit called ASAS-SN. It also used off-the-shelf cameras, but distributed them around the world so that they could detect supernovae and other transients. Because everything was off the shelf, they could build out their network on a shoestring budget. I believe they're the first to discover the vast majority of bright supernovae these days.




- "diffuse objects with low surface brightness"

These are the kinds of things amateur photographers with small telescopes (and lots of patience) sometimes discover:

https://old.reddit.com/r/space/comments/13uco46/i_discovered... ("I discovered this planetary nebula using a $500 camera lens, now it carries my name")

https://www.astrobin.com/i9yy6f/ (18 hours!)


I was struggling to see the planetary nebula inside that blue circle before I realized that it is the planetary nebula. Cool!


Somewhere there's a large, bright nebula in the shape of a red arrow no astronomer's ever noticed.



I still don’t understand how objects of that angular diameter are still being discovered. I would have to imagine lots of people have seen it, but just never chose to document or catalog it?


This is an extremely long (18-hour) exposure taken through specialized narrowband spectral filters that aren't useful for much beyond these particular targets.


Oh nice!!! I looked at a couple of different sky surveys, and it was nowhere to be found in their data. That’s so cool!


So that seems like it would be a great use for the Seestar S50, a $500 smart telescope that is controlled with your phone and can rotate and track objects in the sky. A bunch of people have bought them, and they're now distributed throughout the planet.


It’s interesting that, technically, an individual camera’s sensor is itself an array of smaller sensors, each capturing an individual pixel. So you have something like a tree of arrays.

Maybe we can keep stacking them. Build an array of arrays of cameras/telescopes

What would be the limit?


The sky is the limit. Or the target at least.


Can one use millions of smaller detectors if one finds a way to point them in one direction and synchronize them to take pictures at the same exact moment?

I mean, can millions of phone cameras make one giant virtual telescope?


Broadly, yes.

The key term you are looking for is "exposure stacking". See for example https://markus-enzweiler.de/software/starstax/ and https://www.cloudynights.com/topic/719318-stacking-data-from...
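A minimal illustration of why stacking helps: averaging N aligned exposures leaves the signal unchanged while independent random noise drops by roughly sqrt(N). A toy numpy sketch (the frame count and noise level are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.full(10_000, 5.0)   # a faint, constant patch of sky
noise_sigma = 2.0
n_frames = 100                  # e.g. 100 cameras firing at once

# Each simulated exposure is the same signal plus independent noise.
frames = signal + rng.normal(0.0, noise_sigma, size=(n_frames, signal.size))

single = frames[0]
stacked = frames.mean(axis=0)   # exposure stacking = align + average

snr_single = signal.mean() / single.std()
snr_stacked = signal.mean() / stacked.std()
print(f"SNR of one frame:          {snr_single:.1f}")
print(f"SNR after stacking {n_frames}: {snr_stacked:.1f}")
```

With 100 frames the improvement comes out close to the expected sqrt(100) = 10x, which is the whole trick behind tools like StarStaX.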


Took me a while to understand what you meant. A phone camera already is millions of smaller detectors... But I think you mean coordinating millions of people to all take photos of the same part of the sky and then combining all the photos? I'm sure it can be done with an app and a way to build that crowd of users! But the field of view will still be huge, because they're not telescopes/telephoto lenses.


Yeah, modern phones use multiple cameras to produce a single image. Would it be possible to produce a higher resolution photo using millions of images taken from millions of locations?

I have no clue what I am talking about, but would love to hear somebody knowledgeable speculate on this.


For laughs I once combined frames from some really old footage. I upscaled the frames so that each pixel became a cube of same-color pixels. Then I stacked them and shifted them to line up properly. The resolution went up and more detail was revealed. Not sure what the limit of that approach is, but even for detail that appears in only one frame you can remove its grain and correct bends (wobbles? distortions? wrinkles? waves? what is the word?)
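What's described above is essentially shift-and-add super-resolution: upscale each frame with nearest-neighbour interpolation ("each pixel becomes a cube"), register the frames by their sub-pixel shifts, and average. A toy 1D sketch, where the scene, shifts, and noise level are all invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)
k = 4                                                        # upscale factor
hi = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))   # "true" scene

# Simulate k low-res frames, each sampled at a different sub-pixel shift
# and contaminated with sensor noise ("grain").
shifts = range(k)
frames = []
for s in shifts:
    low = np.roll(hi, -s).reshape(-1, k).mean(axis=1)  # shift, then downsample
    frames.append(low + rng.normal(0.0, 0.2, low.size))

# Reconstruction: repeat each pixel k times (pixel -> "cube"),
# undo each frame's shift, then average all frames.
recs = [np.roll(np.repeat(f, k), s) for f, s in zip(frames, shifts)]
stacked = np.mean(recs, axis=0)

mse_single = np.mean((recs[0] - hi) ** 2)
mse_stacked = np.mean((stacked - hi) ** 2)
print(f"MSE, one upscaled frame: {mse_single:.4f}")
print(f"MSE, shift-and-add:      {mse_stacked:.4f}")
```

Because the frames sample the scene at staggered offsets, the average both suppresses the noise and breaks up the blocky piecewise-constant structure of any single upscaled frame.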


> correct bends (wobbles? distortions? wrinkles? waves? what is the word?)

Since you're talking about video footage, I would guess it's rolling shutter distortion you saw. This can result in wobbles, skew, or aliasing artifacts.


Now, maybe. Soon, not really: AI features in camera apps will erase weak signals and/or replace them with some creative interpretation from a generative model. COTS cameras are becoming increasingly useless for doing science.


Are the AI changes also made on RAW format images?



