
There were some comments yesterday on tracking subs, talking about the throughput requirements. It made me think about another bit of orbital infrastructure: in-orbit inference engines on satellites, so processing on vast gobs of data can happen without beaming the reams of data down.

But yeah, it seems like interoperable space protocols are starting up. BACN-Mesh works. Maybe not quite as important, and I forget the name, but I feel like there were also some command protocols starting to emerge into semi-standard form.




See Phi-Sat-1 and, to a lesser extent, EO-1 as someone else mentioned. Phi-Sat-2 is going up at some point. I'm on a team of researchers working on this. We looked at flood detection as a case study, but as you say, almost all the demonstrators at the moment focus on reducing unnecessary data transfer. Happy to answer questions about the state of the industry/research.

https://www.nature.com/articles/s41598-021-86650-z

https://europepmc.org/article/ppr/ppr534649

Also some work on unsupervised change detection:

https://www.nature.com/articles/s41598-022-19437-5

But there are undoubtedly lots of other people/companies/militaries exploring this. We didn't build the hardware, for example, and we weren't the only experiment onboard our flight. The main differentiator is how many have actually deployed in space. It's very easy to show that you can run a model on an accelerator; it's much harder to get it into orbit, test it on real imagery from new sensors, and so on. The challenge is that most people will have to train a model on simulated data, fly a cubesat or another small platform, and then re-train the model to adapt it to real orbital imagery. Aerial imagery models don't always transfer well to satellite images.
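
To make that re-training step concrete, here's a rough sketch of the kind of fine-tuning involved. PyTorch is used purely for illustration; the backbone, labels, and freezing choices are placeholders, not the setup from the papers above:

    # Sketch: adapt a model pre-trained on simulated/aerial data to the
    # new sensor by fine-tuning only the classifier head on a small set
    # of labelled on-orbit images. (Hypothetical setup, not our pipeline.)
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=None)           # assume weights loaded from simulated-data training
    model.fc = nn.Linear(model.fc.in_features, 2)   # e.g. flood / no-flood

    for p in model.parameters():                    # freeze the backbone...
        p.requires_grad = False
    for p in model.fc.parameters():                 # ...and adapt only the head
        p.requires_grad = True

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    def finetune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
        """One adaptation step on a small batch of labelled orbital imagery."""
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

finetune_step would be run over whatever small labelled set you can assemble from the first real scenes off the new sensor.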

My take, having been to conferences and spoken to others in the field, is that there are a lot more researchers interested (as of 2022), and we'll start to see more publicly deployed experiments in the next year or two.


The European Space Agency has already demonstrated on-the-edge processing with Phi-Sat-1. They use AI to detect clouds and only downlink images below a certain cloud cover threshold.

https://en.wikipedia.org/wiki/Phi-Sat-1
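
Conceptually the gate is tiny. Something like this toy sketch, where the threshold and the brightness "model" are stand-ins, not Phi-Sat-1's actual classifier:

    # Hypothetical onboard downlink gate: estimate cloud cover per tile
    # and only queue tiles below a cloudiness threshold for downlink.
    import numpy as np

    CLOUD_THRESHOLD = 0.30  # assumed: drop tiles with >30% estimated cloud cover

    def estimate_cloud_fraction(tile: np.ndarray) -> float:
        """Placeholder for the onboard model; a crude brightness heuristic
        stands in for a real cloud-segmentation network."""
        return float((tile > 0.8).mean())

    def should_downlink(tile: np.ndarray) -> bool:
        return estimate_cloud_fraction(tile) <= CLOUD_THRESHOLD

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        tile = rng.random((512, 512))  # stand-in for a calibrated image tile
        print("downlink" if should_downlink(tile) else "drop")

The real system runs a trained CNN on the Myriad 2 quoted elsewhere in the thread, but the gating logic is the same shape.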


And this was previously demonstrated by NASA on EO-1. (2017 paper)

https://ml.jpl.nasa.gov/papers/wagstaff/wagstaff-eo1-17.pdf


> Intel Movidius board with a Myriad II chip (VPU)

So, is this cubesat 99% thermal control radiators or what?


This to me seems like the biggest problem with the concept: on Earth, heat management is incredibly cheap; in space, it's a gigantic problem.


This is one of those ideas that keeps sounding new but is very much already in the works. I've had run-ins with NASA folks, startups, and DoD partners who are pushing on this problem.


The problem with this is the energy required and then the heat you have to dump. I haven't penciled out the numbers, but it feels like at least a decade or two of model and GPU improvements to get anywhere close.
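
A very rough pencil of the heat side, just for scale. Emissivity and radiator temperature are assumed, and absorbed sunlight/albedo is ignored, which only makes things worse:

    # Back-of-envelope radiator sizing: a radiator at temperature T rejects
    # roughly eps * sigma * T^4 per square metre.
    SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W / (m^2 K^4)
    EPSILON = 0.9        # assumed radiator emissivity
    T_RADIATOR = 300.0   # assumed radiator temperature, K

    def radiator_area_m2(heat_watts: float) -> float:
        return heat_watts / (EPSILON * SIGMA * T_RADIATOR ** 4)

    for load in (10, 100, 1000):  # watts of compute heat to reject
        print(f"{load:5d} W -> ~{radiator_area_m2(load):.2f} m^2 of radiator")

So a kilowatt-class compute load already wants a few square metres of dedicated radiator, before the rest of the bus is accounted for.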


Can GPUs run reliably in an orbital radiation environment at all? The combination of extreme transistor density and zero error checking seems like the opposite of what you'd want in a space processor.


This is just a vague intuition, but I feel like space hardening is typically done to ensure very, very high levels of reliability in all conditions. I'm not sure how much easier the job might be if you could accept a sizable amount of transient failures. For many of these inference systems, it feels like some "bad" processing might not really be a problem.

As others have said, though, and as your "extreme transistor density" points to, high heat dissipation and energy usage are absolutely real factors here. Still, Coral, back in 2017, was a half-watt, 2 TFLOPS inference engine on a non-cutting-edge (at the time) process.


> not sure how much easier the job might be if you could accept a sizable amount of transient failures

Starlink pioneered this. Redundant COTS almost always beats rad hardening, particularly in LEO.
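
The software side of that trade is old hat. A toy majority vote across commodity units (an illustration of the idea, not Starlink's actual scheme) looks something like:

    # Redundancy over hardening: run the same computation on N independent
    # commodity units and majority-vote the result, so a transient upset on
    # one unit doesn't corrupt the output.
    from collections import Counter
    from typing import Callable, Sequence

    def majority_vote(replicas: Sequence[Callable[[bytes], int]], payload: bytes) -> int:
        results = [replica(payload) for replica in replicas]
        value, count = Counter(results).most_common(1)[0]
        if count <= len(results) // 2:
            raise RuntimeError("no majority; retry or fall back")
        return value

    # e.g. majority_vote([unit_a.classify, unit_b.classify, unit_c.classify], tile_bytes)

With cheap commodity units, the extra replicas often cost less in mass and money than the rad-hard premium.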


Why particularly in LEO? Benefits of some atmosphere still?


> Why particularly in LEO?

Faster degradation means quicker fleet refresh cycles. If you're replacing your birds every five years anyway, the net benefit of rad hardening is diminished.


From a radiation standpoint I think the Earth's magnetic field helps more than the atmosphere at that altitude.


> in-orbit inference engines on satellites, so processing on vast gobs of data can happen without beaming the reams of data down

The article’s point is this will soon be obsolete. Processing at the edge is enormously costly.


I'll go into more detail on why I believe that in part II, but you are spot on.


Do you think it would be feasible to do realtime drone tracking/detection from satellites? What would be the bottleneck?


> in-orbit inference engines on satellites, so processing on vast gobs of data can happen without beaming the reams of data down.

It’s a good idea, and I bet folks like Cloudflare that already operate Edge Compute platforms will have a head-start here.


The bottleneck for orbital edge isn't software; it's hardware that can hook into an optical terminal and tolerate radiation. The edge analogy breaks down because the consumer isn't on the edge, the sensor is; it's just a problem of doing heavy compute next to the sensor, then sending back a summary data product.

After that, networking is well understood.


IoT sensor aggregation is already a major edge-compute use case; it's not just about reducing latency for end users. In this analogy you're just co-locating the edge compute on the same LAN as the sensor. All of the orchestration, runtime, and framework concerns from existing edge-compute systems still apply.

Edge compute isn't solved; it's an area under active and aggressive development, the Cloudflare Workers Wasm runtime for example. And if you're selling satellite edge compute, what's your control plane? You need a config store, etc.

Put differently: if I'm a satellite operator writing first-party software to run on my satellites, sure, there's no framework code to write, and I can just tailor the software to my satellite platform's hardware. However, if I'm selling colocated compute with ultra-low latency to third parties, then I need to build something that (IMO) looks very close, if not identical, to a current-generation edge-compute platform.
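
As a concrete (entirely invented) example, even a minimum-viable control plane ends up shipping around something like a per-tenant workload descriptor:

    # Invented sketch of the kind of record a satellite edge-compute control
    # plane has to manage: what to run, on which birds, under what resource
    # caps, and where the results go.
    from dataclasses import dataclass

    @dataclass
    class EdgeWorkload:
        name: str                    # tenant-visible identifier
        image_digest: str            # pinned container/wasm artifact
        target_satellites: list[str]
        max_power_watts: float       # power budget enforced by the platform
        max_memory_mb: int
        result_endpoint: str         # where summarised outputs get downlinked/relayed

    workload = EdgeWorkload(
        name="ship-detector-v3",                 # hypothetical tenant workload
        image_digest="sha256:...",               # placeholder digest
        target_satellites=["sat-17", "sat-21"],  # hypothetical fleet IDs
        max_power_watts=5.0,
        max_memory_mb=512,
        result_endpoint="https://example.com/ingest",
    )

Everything around it (placement, quotas, billing, result routing) is standard edge-platform machinery.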


Why are we selling colocated processing on a piece of mass-constrained, ultra-expensive hardware we have to launch into orbit again? That seems like a solution in search of a problem.


Sibling thread provides some examples: https://news.ycombinator.com/item?id=34210236


Bandwidth to send down results vs raw data? Ability to relay instructions directly from that data to the next satellite without a downlink and uplink?
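
Rough numbers (all assumed) for the first point:

    # Illustrative comparison: downlinking a full multispectral scene vs.
    # downlinking only a summary data product (e.g. detected objects).
    RAW_SCENE_BYTES = 10_000 * 10_000 * 13 * 2   # assumed 10k x 10k px, 13 bands, 16-bit
    DETECTION_BYTES = 1_000 * 64                 # assumed 1,000 detections at ~64 bytes each

    ratio = RAW_SCENE_BYTES / DETECTION_BYTES
    print(f"raw scene:  {RAW_SCENE_BYTES / 1e9:.1f} GB")
    print(f"detections: {DETECTION_BYTES / 1e3:.0f} KB")
    print(f"~{ratio:,.0f}x less to downlink if you only send results")

Exact sizes obviously vary wildly by sensor and product, but the gap is orders of magnitude.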


This makes me imagine a space-station-like data center in orbit that acts like a public cloud on Earth, communicating directly with satellites and renting out processing time. I'm not sure how well economies of scale work when you have to pay for every kg you send up there, but it would be an interesting business model.


Is it cheaper for a satellite to transfer data to a space-based data center than to a ground-based one? If not, what's the benefit of locating the data center in orbit? (Except, of course, that it's awesome.)


Stop imagining: in 2021 a cluster of computers was sent up to test this, and since then several instruments have been in the works to make use of similar (but not the same) CPU clusters.

As always, finding a system that is affordable and radiation tolerant is the problem.


Do you have a link to this?


Here is one paper by the group working on this.

Full disclosure: some of my own code and a paper of mine are involved, but I'm not linking them here.

https://ieeexplore.ieee.org/abstract/document/9884906



