Me: 20 years in the defense space business. Stating my own opinion.
This is basically right. The problem with space imagery is that almost everyone who wants it has a niche use case, and those few organizations whose use case isn't niche (the US Weather Service, various militaries, etc.) generally want imagery that's so specialized to their own problem that they have to spec, buy, and operate their own orbital assets.
Take Ukraine as an example. Leaving aside the moral question of whether a satellite imagery company should be profiting off the Ukrainian war, Ukraine appears to be using commercial orbital imagery providers to figure out Russian troop movements. That use case is not one any commercial provider anywhere is going to build an ML model for. But analysts working on behalf of Ukraine can absolutely either use raw pixels or develop their own ML algorithms that run on top of the raw pixels to find Russian tanks.
And almost every other potential user is similar. They're all looking for something different. Oil companies want to pre-screen drilling locations. NGOs want to look at deforestation in Brazil or methane leaks in Saudi Arabia. You could even go all the way down to individuals -- at the right price, individual farms might want to look at relative growth rates of corn in their fields, or soil moisture levels, etc. Or they might want to count heads of cattle or sheep, or... or... or.
The point being, outside of weather, which we already know how to get to end users without having them subscribe to an orbital imagery provider service, every customer is different, and what they want from the pixels is different. It's basically the long-tail problem. In order to be profitable you have to fill an enormous number of niche use cases.
Thanks for the perspective, Glen. I've often heard this referred to as the "long tail" problem for satellite imagery providers. The area under the curve is enormous, but any one algorithm only serves an extremely niche audience.
Another analogy I use a lot: satellite imagery is like salt. The dish can have lots of ingredients (in a military context: HUMINT, OSINT, SIGINT, etc.), and the satellite imagery can make the whole dish. But you never want to consume it in isolation; that would be disgusting.
>> but any one algorithm only serves an extremely niche audience.
Forget the algorithm. They need to fix their customer service and their sales process. Customers want pixels, but they need basic services: simple ordering and a predictable price. I want to give a location (pix+radius, four corners, or whatever) and a delivery schedule (once a week, etc.). But when I try to buy that stuff I get package deals, "ask for a quote", and vague statements about delivery times. That doesn't work. Normal customers, i.e. not intelligence agencies, don't want a drawn-out negotiation process. The first company that can provide a basic web interface for purchasing imagery quickly and piecemeal will win the market.
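To make that concrete, here's a sketch of the kind of self-serve order I mean (entirely hypothetical - no provider exposes anything this simple today, which is exactly the complaint):

    # Hypothetical self-serve tasking order: location + schedule + flat price.
    # Field names are made up for illustration; no real provider API is implied.
    order = {
        "aoi": {"lat": 42.96, "lon": -85.67, "radius_km": 5},  # point + radius
        "schedule": {"every": "7d", "start": "2022-05-01"},    # once a week
        "product": "rgb_50cm",                                 # plain pixels
        "delivery": "https://example.com/webhook",             # push on capture
    }
    # Desired response: a firm price and a delivery window, not "ask for a quote".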
Top of the list for small customers are probably high-end real estate agents. They want to monitor their neighborhoods for houses that are under delayed construction or backyards/pools that are being neglected (sure signs of someone ready to sell).
Civil litigation attorneys: I want everything you have about this particular intersection. Cops: I want any images you have of this house between these dates. News agencies: There is a Russian ship on fire at X location. When can you get us an image? And a great many other small customers I cannot think of at the moment.
Working in the agribusiness industry, I can tell you that satellite imagery solves many problems. But satellite images alone won't help; you need to analyze and calculate various indices and indicators. I can say that the accuracy of the analysis is high, and it is definitely worth spending money on this software. Here I found some interesting information about high-resolution imagery: https://eos.com/products/high-resolution-images/
Regarding small customers such as real estate agents, attorneys, cops, etc.: is it actually economically profitable to serve people like them?
I know satellites are getting cheaper, but there are still limits to the cost of targeted image acquisition, right? Is the price these customers are able and willing to pay enough to cover those costs?
For example it'd be a cool feature to check where my food delivery man is at in real time through satellite imaging but that can't possibly be a profitable use case.
The issue is that Google Earth imagery is good enough for most manual processes, Open Source imagery is good enough for a lot of automatic processes, and there are relatively few markets that are willing to pay for frequent (monthly or better) updates.
> I want to give a location (pix+radius, four corners or whatever) and a delivery schedule (once a week etc)
This is the requirement. The timeline could be once a week, once a month, or once a .. - the point is not frequency but predictability. With Google Earth, Open Source, et al. there is no predictability; that's why they are not the answer for THIS potential customer.
I would imagine major US/European cities are pretty well served in this regard by annual and semi-annual aerial surveys. But that still leaves a lot of smaller cities that aren't proactively covered by Nearmap/Hexagon/etc. They are part of the long, long tail that is criminally underserved today.
> Top of the list for small customers are probably high-end real estate agents… [want to monitor] backyards/pools that are being neglected.
This is basically corporate surveillance and I’m not at all comfortable with it. If you changed ‘real estate agent’ to ‘large landlord company’, the public would find this exceptionally unpalatable.
Obviously if you don’t want your pool surveilled by low orbit satellites, you’re free to either raise your home to a higher altitude, or consider relocating to a more private and subterranean dwelling.
When you go outdoors, people are free to look at you, but that does not mean (as a rule) they can shoot pictures of you without your permission. Same principle can be applied to your property. Like, if a stalker takes a photo of your window hourly or keeps track of your car – that could be treated as infringement of privacy. If a door is unlocked, that does not mean you're allowed to trespass.
> When you go outdoors, people are free to look at you, but that does not mean (as a rule) they can shoot pictures of you without your permission.
That's not generally true in the United States. If you are on public property, you can take a photo of anything or anyone you can see. You do not need their permission.
There may be local laws that restrict this, but they are not the common case.
Anywhere is literally viewable from space. Some radiation types also give information to sats about what happens indoors. So your question is basically:
Do you think you have a reasonable expectation of privacy anywhere?
Reading it like this, is this really the world you want to promote? Or is privacy being only obtainable in bunkers a few km below the surface an acceptable compromise?
I'm not aware of any technology as of yet that can see activity through a thick and insulated roof from low earth orbit or even a few thousand feet up for that matter. But I could be wrong.
I remember seeing a (dead tree) newspaper article a few years ago about the police using sat images to detect potential cannabis plantations by the heat radiating from the building.
Sorry, can't seem to find it on the net after a quick DDG search.
Yes, that would make sense because lots of grow lights can cause significant heat. I don't think heat technology could see individual people through a roof though.
SLAs, perhaps - and more broadly, the notion that even reasonable SLAs mean $$$$$$, while non-SLA means $$$$ but is basically best-effort and costs significantly less.
So for example non-SLA "once a week" might really mean "once a week except if any of these 4 million other things take precedence" and translate to intervals being skipped on a regular basis, photos being days or weeks old, etc. SLA'd once-a-week would really be once a week and not a second late, but truly prohibitively expensive.
I'm reminded of the US telephone systems from around WWII through the 60s and 70s, where Important People could pick up a phone and always get a connection, even if that booted off a few tens or hundreds of calls on a shared trunk in the call path. Satellite is like an order-of-magnitude-worse version of that problem because of the associated eye-watering costs and subsequent plain old near-zero availability.
That’s a good point: often the data product isn’t generated solely (or even mainly) from orbital imagery. Often it’s provided mainly from data the end user already has, and for which orbital imagery serves either as a cueing system (providing candidate locations which the end user will verify) or a verification system (providing final verification of locations the end user has already cued).
Certainly that's true with, e.g., the petroleum industry and big ag.
> The problem with space imagery is that almost everyone who wants it has a niche use case
But this is a good problem. It's like saying "the problem with motors is that everyone who uses them has a niche use case" (submarines, cars, airplanes, industrial machinery...). Or that "the problem with microscopy is that everyone who wants it has a niche use case". And indeed it does! Microscopes for biologists are different to those for chemists, engineers, medical doctors, physicists, etc.
The concept of "space imagery" is extremely wide. It is natural that earth observation satellites become specialized. I wouldn't be surprised to see in the near future some "CH4" or "CO2" satellites that acquire light in a handful of extremely narrow particular bands on the short wave infrared spectrum that are useful only for observing plumes of these gases. Right now, people use hyperspectral imagers (which have a dense sampling of some parts of the spectrum) and throw away most of the image data.
Absolutely. And what you do in that case is to sell motors to people who want them. If they give you feedback on how to make better motors, take it and make better motors.
What you don't do is presume that you know everything about how people want to use motors, and offer a subscription to a design service that does all of their engineering design for them, as a way to sell your motors.
It's only a good problem when your niche customers have deep pockets to pay for their varied needs and a high risk tolerance. Most variations can't be prototyped by some engineer on the weekends who then spins out a startup with minimal risk to address the gap.
The false assumption that drives me insane, and that I deal with from so many individuals, is that everyone thinks their specialized use case is a small variation on the major use case that already benefits from economies of scale. That often isn't the case, and a significant R&D effort needs to go into figuring out just how one can leverage existing technology for their use case.
There's very often at least one, if not many, mission-critical functional requirements that take significant effort to bridge from the existing tech to the desired use case. And guess what: all the non-niche users don't want to pay for that, so you need to be prepared to pony up the capital, accept the risk of failure, and be ready to take the plunge.
I tell people with this mentality that they need to work in reverse: first understand the technology they think is close, then find the problem sets that have the best match-up and focus on those. These efforts can easily cost hundreds of thousands, if not millions to tens of millions, if you just play it by ear with "...this thing is sorta like what we want, so it can't possibly be that difficult to adapt." (Basically what I hear from technology management, many business people, and clients.)
Many people pretend software and tech are just Lego blocks, and since it's virtual, there's no capital needed. Good luck: the skills needed to deal with this tech aren't cheap, and the complexity often isn't low, meaning expensive and hard-to-find labor for long periods of time, often with a fairly good chance of failure.
>> find the problem sets that have the best match up and focus on those.
Or, abandon all of that. A space imagery company is a media company. They generate content. So do what other content creators do. Take everything you have and shovel it in front of people as fast as possible. Deliver it by any and every means available. Ask questions, but do so AFTER they have already made a purchase. Then judge what your customers want by what they choose to consume.
Is this even feasible? I imagine high resolution satellite imagery is a lot larger than your typical blog post or YouTube video. Plus the surface of the earth is huge; if you push out everything, 95% of it is probably irrelevant crap that doesn't tell you anything about your customers. But I don't run a media or satellite imagery company, so this is just what I imagine lol
Why would that matter? Consumers get to choose which subset of the data they use. If they want to download the entire Earth at 5PB, just charge them appropriately.
My impression is that it matters because if the data hasn't been collected with a specific purpose in mind, then it's probably not going to be useful for a lot of cases.
Say I'm a farmer and, for example's sake, I want regular aerial images to estimate crop yield for a plot of land. That would require high resolution imaging of a particular patch every day for maybe a few months. That would require purposeful data capture at regular time intervals during overhead satellite passes.
From what I understand, satellite imaging companies won't just happen to have those images containing the region of interest you want lying around on their hard drives, ready to pull out with some query. If they make all the images they have available to the public, I'm guessing most of it will be useless to people, considering it needs to cover the desired region of interest across time and space.
Well, this seems to cry out for a big coalition. Build it and they'll come. Iridium failed because it was too early (too costly; only a few specialized users had the cash for it); Starlink seems to thrive because it can go general.
> like saying "the problem with motors is that everyone who uses them has a niche use case" (submarines, cars, airplanes, industrial machinery...)
This is a good analogy. How many companies say “I’ll build a sweet motor and then find a customer for it”? They don’t. They build the motor for the application. (More often, they build something close to the final product.)
That's only true in extremis: it's not even completely true of car/industrial motors! Companies like Cummins design motors that are used in all sorts of applications, e.g. https://www.cummins.com/engines/qst30
When you look at smaller motors (e.g. DC motors in handheld consumer products), they're basically jellybean parts targeted towards the highly specific use case of making a shaft turn.
Wrong. They absolutely do. In fact I'd wager the vast majority of motors are essentially commodity goods. Even if you were thinking of just combustion engines, it's really not the case; most engines get reused many times. Which makes sense: it's an outsized engineering problem to optimize, so you'd want to maximize your return.
Usually corporate culture reasons prevent a business like that from just taking your money. They don't want to undercut their giant custom contracts and don't know how to handle customer support for cheaper people.
And I can imagine the cheaper customers are a lot less educated and don't have the right expectations, like how often their target area is covered in clouds.
Drones exist, and cost a lot less than thousands to buy. Plus a drone can be used when you want; the satellite goes overhead every few weeks, and if it's a cloudy day, too bad. For farms, the information from images is only valuable on a few specific days: too early and you don't learn anything, a few days later and it's too late to do anything about it.
I work for John Deere. I can't speak for the company. I'll confirm we are interested in imaging, but I won't comment on our plans to deal with the above issues.
Some providers could give you daily updates, but yes, drones are much cheaper and much more flexible. You need a custom sensor? Drones can do it; on satellites, don't even think about it unless your budget is >$100M.
There are like 2 million farms in the USA, and if that information is worth something like $10/month to them, then that's a potential market worth a quarter of a billion per year.
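Back-of-the-envelope, using those assumed numbers:

    farms = 2_000_000        # rough count of US farms (the figure above)
    price_per_month = 10     # assumed willingness to pay, USD
    annual_market = farms * price_per_month * 12
    print(annual_market)     # 240,000,000 -> roughly a quarter billion USD/year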
I've not worked in the industry per se, but I'm quite passionate about maps and online maps generally speaking, and I'm curious to find out whether the good old way of identifying stuff on maps directly through human interaction (so, not using any ML stuff) shouldn't be given a second (or a third) thought. Of course, all backed by better tools for said humans, so that they can annotate/label/identify the data better.
I realised that might still be a viable option once I started to map the forest cover in my geographic area based on 150-year-old maps (these excellent maps [1] drawn by the Austrians in the 1860s). Some JS code for the front-end that helped me "draw out" the forests, then send that data to a Django/Python backend and call it a day. It took me a while to draw those forests out, but the job was very easily scalable in terms of human-power, i.e. 10 more of me would have probably finished it 10 times faster with a little coordination (for anyone curious, the data is in here [2]; I'd say it was almost 90% done when I passed on to my next project, as I often do). I was first looking at using some heuristics in order to "automate" part of the job, but both the false positives and the false negatives were just too high; the outputs would have been horrible in terms of accuracy.
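For the curious, a minimal sketch of what such a Django endpoint could look like (hypothetical names throughout - the actual project's code isn't shown here, and the ForestPatch model is assumed). The JS front-end POSTs each hand-drawn polygon as GeoJSON and the backend just stores it:

    # Minimal sketch, assuming a ForestPatch model exists to hold the polygons.
    import json
    from django.http import JsonResponse
    from django.views.decorators.csrf import csrf_exempt

    @csrf_exempt
    def save_forest_polygon(request):
        feature = json.loads(request.body)  # GeoJSON Feature drawn in the JS UI
        if feature["geometry"]["type"] != "Polygon":
            return JsonResponse({"status": "rejected"}, status=400)
        ForestPatch.objects.create(
            geometry=json.dumps(feature["geometry"]),
            source_map="austrian_1860s",  # which historical map sheet it came from
        )
        return JsonResponse({"status": "ok"})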
I see a similar thing in the consumer data industry. Everyone has their own niche case that they want to find (diaper buyers with 2-2.5 year olds, or Etsy merchants, or flavored whisky early adopters, or new movers in houses 20-40 years old, or whatever) and you end up building umpteen custom models with the raw data that instantly get outdated the moment you use them.
I think it’s very, very unlikely. It would risk giving away information on the capabilities of US national security assets. Which is classified literally as highly as it is possible to classify. We don’t even let the Five Eyes know that stuff, much less hand it to non-aligned militaries who we can’t vet.
Plus, it doesn’t appear to be needed. The Russian military is proving to be god-awful at even basic field ops like camouflaging their vehicles.
> It would risk giving away information on the capabilities of US national security assets.
But the actual satellites can be seen with the naked eye, and their orbits are known. The resolution is a linear function of their height, so it can be easily inferred, or at least bounded, by that of a "hubble" at a much lower height. If they are really worried about this scalar piece of data, they can easily blur the images before transferring them to ukraine. Not that it makes a lot of difference to see a column of tanks at 30cm or at 15cm pixels.
"The resolution is a linear function of their height..."
Incorrect. Even in the consumer camera space, resolution is a function of distance, native sensor resolution, lens magnification, lens quality, shutter speed (because the target is moving), stability of the tripod, etc. Same thing is true of orbital imagery: your effective resolution is a function of your optics, your sensor, the ability of your attitude control system to hold a steady pointing vector, etc.
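To put a rough number on it: even the hard physical floor - ignoring sensor, atmosphere, and pointing entirely - depends on aperture as much as altitude, via the Rayleigh diffraction limit. A quick sketch (illustrative values, not any particular satellite):

    import math

    def diffraction_limited_gsd(altitude_m, aperture_m, wavelength_m=550e-9):
        """Best-case ground sample distance from the Rayleigh criterion.
        Real effective resolution is worse once sensor sampling, optics
        quality, atmosphere, and pointing stability are factored in."""
        theta = 1.22 * wavelength_m / aperture_m  # angular resolution, radians
        return altitude_m * theta                 # meters on the ground

    # Same ~500 km orbit, very different results depending on the telescope:
    print(diffraction_limited_gsd(500e3, 0.10))  # ~3.4 m  (smallsat-class optics)
    print(diffraction_limited_gsd(500e3, 1.0))   # ~0.34 m (large commercial bird)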
Also, "capability" != "resolution". What frequency bands is that satellite imaging in? Visible, SWIR, LWIR, ultraviolet? What is the effective magnification of its optics? Is it an optical system at all, or is it an RF bird? Is it all of the above? Does it just take top down snapshots, or can it track moving targets? If the latter how fast can it track? Fast enough to keep up with a tank, or fast enough to keep up with a fighter plane? How many frames per pass can it take? How fast can it slew to get multiple objects in the same pass? Etc.
> It would risk giving away information on the capabilities of US national security assets.
Trump already did this when that Iranian something-or-another blew up (missile silo? nuclear reactor?) a few years ago, and he provided classified satellite images to the public.
"gave away" old tech that just confirmed what everyone assumed. Your article even states it's not our best and most of the security agencies cared more about the process of how he did it then the image itself.
I expect the US is giving the analysis, not the images. Ukraine generally shouldn't be looking at raw images; they need data to inform how and where to attack, but the US has experts to provide that, thus allowing more people to go to the front.
I'd guess the US military would not share their own sat feeds for fear of revealing their capabilities, they certainly could share techniques to exploit commercial sats in an effective manner for military purposes.
> Ukraine appears to be using commercial orbital imagery providers to figure out Russian troop movements.
Is this really true? I guess I can believe it, but it seems strange that the United States is sending over $800 million in weaponry, but won't send over satellite imagery.
I believe the USA is sending over intelligence reports derived from that satellite imagery; however, the raw images may easily be considered too sensitive to release to others because they reveal both the exact capabilities and potential countermeasures. I recall the (IMHO justified) uproar back when Trump tweeted a photo of a single recon satellite image of Iran; they definitely would not just send over all the raw data.
> That use case is not one any commercial provider anywhere is going to build an ML model for.
Why not? It's been a while since the west has had a war where the moral details are so clear. If I had a company like this I'd be jumping on the opportunity.
Beforehand, I mean. There are market opportunities out there, but the problem is they're impossible to see coming. Nobody thought Russia would invade Ukraine, until it did. Nobody thought Ukraine would survive for a week after Russia invaded, until it did. So there was absolutely no possibility anyone would have seen a potential market for tracking Russian tanks in Ukraine and selling that data to the Ukrainian military.
One company trying to address that is Up42, a subsidiary (hope that's the appropriate term) of Airbus, building some marketplace for SAT imaging analysis.
Interested to hear Mr. Glen's opinion about that approach.
Up42 only exists due to discounted pricing below commercial rates with some loss-leader pre-purchase discounting model.
History has shown that when you sell something for less than you pay for it, hoping that users don't fully use their credits to cover that loss, time is not on your side.
But then it almost sounds extremely simple: provide as much raw data as you can (spectrum, resolution, surface coverage) and the users will come. Of course, this is outrageously expensive to do unguided.
> Allow me to put it more succinctly: selling derived data as a subscription product does. not. work. I don’t care what it is. The juice is never worth the squeeze.¹ Count cars. Count airplanes. Count ships. Segment land cover. Track oil inventories. Estimate biofuels. Measure water levels. Etc. Etc. Etc.
I see that he outlined a few exceptions in the footnote… I'll also add Plaid. I think this guy is making some huge generalizations that don't hold up beyond his industry.
But that said, I recently downgraded two potential startups based on data feeds to personal projects because I realized that nobody really pays for data, and if there is any value then the providers will figure that out inevitably…
Another recent example: I've been using Deliveries for Mac and iOS for over fifteen years, a very simple, perfectly designed, laser-focused app. Both Amazon (not surprising) and FedEx (quite surprising) have decided that freely providing delivery dates to consumers is too valuable to leave to third parties, so the beloved app is shuttering sometime this year.
https://junecloud.com/journal/iphone/the-future-of-deliverie...
> FedEx (quite surprising) have decided that freely providing delivery dates to consumers is too valuable to leave to third parties...
...or to leave to the shipping endpoint customer, either. I can't tell you how many clicks it takes to determine when a particular shipment will arrive at my door. Off the top of my head:
0. Email from vendor: "your package has shipped!"
1. Log in to the Fedex account.
2. One would think that post-login it would take you straight away to the "Manage Your Deliveries" page, because what is the most common action taken by a residential customer post-login? (My guess is, they want to find out when their stuff is going to show up.) But alas, no. It just takes you to the main page, but now you're logged in.
3. Search for that deliveries page...what is it called? Oh, wait, here's a Track button. Nope, that's not the one you want. Go click some more.
4. Finally find the Manage Your Deliveries option in some buried menu. Click it. It won't take you directly to the shipment that you originally were looking for, but it's in the neighborhood.
5. Ah, the Manage Your Deliveries page, where I can find out when the package will arrive.
6. "A label has been created, but the shipment hasn't been dropped off yet, so we haven't the first fucking clue when your package will arrive. But be sure to come back tomorrow to do this whole exercise again!"
I'm almost to the point of preferring vendors that use DHL instead of FedEx. It's that bad.
You don't need to log in or even create an account at FedEx just to track deliveries. You just need the tracking number. It's the first thing you see when you go to the homepage of fedex.com
You do need to log in to see information on individual box delivery statuses.
Otherwise, you just get the "master tracking number" delivery estimate which seems to be based off of the first box delivery date, not the date all the boxes will have arrived.
There's probably a good reason things are that way for delivery tracking (see Chesterton's fence). But the rabbit hole doesn't end there! Reports generated from their own customer portal don't include per-box delivery dates. They all show the delivery date of the FIRST box. This is extremely frustrating when trying to make accurate models for forecasting, or even just lead time estimation.
They do this to make it harder to compare their services to competitors like DHL (oodles better than FedEx - but there are risk management considerations and also capacity issues), and it ends up harming businesses trying to serve their own customers better. This is especially annoying since they have really granular data internally that goes into even more detail than just delivery date on a per-box level.
Attached is a screenshot of a spreadsheet fed by some internal SQL database and macros that can spit out per-box information including delay reason (weather, transit, act of God) and even number of hours late.
If anyone at FedEx is reading this, please consider pushing the narrative that your business customers aren't the end of the line for the goods you move.
If anyone who DOESN'T work at FedEx is reading this, I can email you a redacted copy of the spreadsheet I referenced above (just send it to tepitoperrito AT 420blaze DOT it).
This is not news to me. But given the four or so commenters saying basically the same thing (you don't need to log in), does it not seem odd that the user experience sucks less if one does NOT establish at the beginning that one has an account for such things? What the hell is that whole "Manage Your Deliveries" page for, then?
But telling me I'm doing it wrong for using an advertised feature, yeah, that's less than useful.
Similarly, if you are not logged into Google, but have a cookie, and try to view a public user-shared document, login wall. No cookie? Document just works.
Sounds like you’re trying to use a delivery portal but didn’t finish setting it up.
I use the UPS version as I get a lot of UPS packages. They typically email me once the shipment is picked up, typically before the sender does. They have a dashboard where you can see all inbound shipments as well. This requires registration and address validation, but not a UPS "account".
I've no skin in the game, but having dealt with tracking a single FedEx package today, I found it to be a remarkably OK experience.
No login, though.
Their 'notifications by email' section was pretty messed up.
How many people receiving FedEx packages from online purchases actually have an account with FedEx to log into? In the words of Steve Jobs, "You're holding it wrong".
I agree - most of my observations here do not apply broadly beyond satellite imagery. One of my favorite resources for learning how "data as a service" businesses work (when they work) is the "World of DaaS" podcast that Auren Hoffman hosts: https://www.safegraph.com/podcasts
I agree - some more exceptions: Hedge funds and VCs as well as brokers and other investors definitely _do_ pay for insights on top of public data feeds. TipRanks (worked there 2012-2017) is a big'ish (profitable) company doing (mostly) exactly that.
I think they were referring to their own startup ideas. (I was also momentarily confused because "downgrade" tends to be used in the context of investments.)
lol. Was I using investment lingo? How humiliating.
I managed to put together an MVP of a service that would push notifications for stuff like "tell me when my favorite musicians/novelists/artists have new consumable," all the while thinking "if any of these people were smart they would have done this a decade ago." Of course Amazon started pushing out the book stuff, and Apple Music with music, just a few months after I got the back-end APIs working.
When I was an agronomist I used satellite imagery a lot for monitoring farm crops for our customers. I started in the early eighties going up in a Cessna, lying on my belly where the passenger seat was, sticking a 35mm camera out a repair port.
But that got old, and I started researching satellite options. Problem was, our company's entire sales area was three West Michigan counties. All the companies wanted to sell me - I forget the term, but say a tract (scene?) - that was half the state.
I didn't have a problem with the price per acre but wanted to buy by the section, or 640 acres. It didn't make a lot of sense to buy the township, but they wouldn't even sell me by the county. At the time probably 5% of my customers wanted this service, and they only wanted to buy just part of their acreage.
I asked all these vendors why they couldn't sell me by the township. They routinely answered that they'd lose money because they couldn't sell the rest of the tract. I said, now you understand my problem; at a certain point you've got to break up a tract or give up on the market. Most of the companies ended up leaving ag.
I eventually found a company that would sell by the section at a higher but still manageable price. Course then I discovered that in four flyovers a season the majority had clouds over the field and you couldn't see anything ;<(. Such is the pain of being an early user in any field.
FYI: the military has satellites using radar to see through the clouds, but in thirty years that tech hasn't made it to the commercial operators.
The answer, I think, is automated drones that fly a circuit. The tech has been available for a few years, but the government won't approve it for that use. In fact, the FAA wouldn't even let a Detroit company demonstrate they could successfully evade objects; they ran out of funding and shut their doors ;<(.
Things have changed a bit at least, but it's still very provider-dependent.
Planet at least will let you pay per pixel for their PlanetScope data (3m res), and with a daily revisit time you are guaranteed at least a few hits per month in the growing season.
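For reference, a rough sketch of what a PlanetScope archive search looks like through their Data API (the shape is from memory and may have drifted; check Planet's docs before relying on it):

    import requests
    from requests.auth import HTTPBasicAuth

    search = {
        "item_types": ["PSScene"],
        "filter": {"type": "AndFilter", "config": [
            {"type": "GeometryFilter", "field_name": "geometry",
             "config": {"type": "Point", "coordinates": [-85.67, 42.96]}},
            {"type": "DateRangeFilter", "field_name": "acquired",
             "config": {"gte": "2022-05-01T00:00:00Z", "lte": "2022-06-01T00:00:00Z"}},
            {"type": "RangeFilter", "field_name": "cloud_cover",
             "config": {"lte": 0.1}},  # the cloud problem, again
        ]},
    }
    resp = requests.post("https://api.planet.com/data/v1/quick-search",
                         auth=HTTPBasicAuth("YOUR_API_KEY", ""), json=search)
    for item in resp.json()["features"]:
        print(item["id"], item["properties"]["acquired"])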
As someone from the industry, all I really want is a Netflix of datasets. Planet.com, Capella, Maxar, ICEYE, Airbus, etc. I hate the guts of their B2B business models. I want an aggregator, I want basemaps that are refreshed on a best-effort basis (not by me buying km^2 of observations), I want standardized formats and metadata across vendors.
Give me that, with some obscene but transparent pricing structure that lets me explore on discretionary funds and exploit for less money than triggering tenders and investment decisions. Then, then you have what will move the entire commercial earth observation industry out of the CAPEX fryer it's in today.
For anything less, there’s stiff competition from ESA’s free Sentinel satellites.
The underlying reason for all of this is price discrimination - they are worried that simple and transparent pricing will lose them margin because their highest value/margin customers will have comparative pricing to help them negotiate.
On the flipside, if you put forward transparent pricing but it’s too high, it will stop prospective customers coming.
I think UP42 is trying this model. They have a catalogue where you can download archived data or task it. But yeah, right now their sources are a bit limited, and it's an open question whether or not they'll be able to sign up enough providers to make it work.
And they might be falling into the trap that the original article points out by trying to offer you processed data instead of being laser focused on having the best data sources and API.
The surface of the earth is huge, and given the high-resolution, close-to-realtime imagery that most people want, it's not really possible to provide standardized sets of images that just happen to cover the region of interest for everyone, right? Hence the need for targeted imaging.
Yeah, this ultimately probably winds up being owned by one or more cloud providers. It’s also the most common business model I see startups trying to do. It usually doesn’t work because you can’t put lipstick (flat pricing, self-service sales motion, open licensing) on a pig (janky and secretly manual integrations with misaligned data providers who view your success as a threat to their most lucrative contracts). More here: “Stop Building Satellite Imagery Platforms and Marketplaces” https://joemorrison.medium.com/stop-building-satellite-image...
This is a great example of the end-to-end argument. Putting smarts in the middle doesn't end up working because you lose the ability to optimize for the application semantics at the ends. It's an "argument" so not guaranteed to be true in all cases but applies here (according to the writer - I know nothing about the specifics of this technology/industry). But interesting to see the pattern recreated.
>In my opinion, every supervised machine learning model is hopelessly biased by the intent of its creator(s). Namely, it inherits the bias of its training dataset (both geographic and semantic).
Profound. And True. Sometimes I wonder whether we can truly call them learning models at all.
Author here - not an original insight, although it's clichéd enough that I can't point you to where I picked it up from.
I also want to emphasize that I do not view bias as a bad thing in the context of supervised models. In some ways, I think it's the whole point of a supervised model (to inherit the judgment of its creators). If the bias helps filter predictions that are useful for your goals, it's a good thing.
Bias is learned behavior, so it seems that "learning models" is precisely the right name, regardless of whether we've considered the ramifications of learning.
1. People don't want satellite data they want their problem to be solved
So a company like Planet shouldn't (just) get satellite data; they should solve problems.
2. Companies can't do 2 things at once well
So Planet actually has to choose between solving problems or just getting satellite data
3. Companies like Arturo have good focus and solve the problem of climate risk for insurance
So Arturo should stay focused on that, but where do they get the data from? Planet right? So Planet does have a set of customers for its data?
--
Edit: I re-read the article and maybe I'm just confused by the wording. Is he saying that raw data is valuable and worthwhile for satellite companies to sell, but they should not do anything to the data before trying to sell it?
That is, indeed, exactly what I am arguing. Either:
* Just sell data or;
* Just sell applications powered by your proprietary source of data
Do not:
* Sell derived/refined data as a half-measure
* Sell both applications and wholesale data at the same time
If you feel like that doesn't make sense, you are not alone. Almost everyone in the industry disagrees with my views on this topic based on how they run their businesses. Satellogic is one notable exception. But I can't think of a single other provider that would agree with me.
I'm in complete agreement with this point of view, with about 22 years experience in the remote sensing space, ~10 of that in private industry.
And it's not just Planet that is hung up on this same failed business model. It's also Hexagon, and a myriad of other earth-observation providers. Some are so difficult to work with it's literally cheaper to go buy an airplane and a wide-format camera and roll your own.
I would love to know more about the possibilities of remote sensing in private industry. I'm working in academia and have the feeling that I'm in an echo chamber all day, every day, with no outside insights at all. Any chance I can contact you, or can you point me in a direction?
Also, full disclosure, I work for a satellite imagery provider (https://umbra.space/) where we're executing on the "just sell data" strategy, so I'm fully corrupted as far as seeing the Truth when my own ego and self interest is inextricably wrapped up in this debate.
Bloomberg's main product offering is still the data feed, though. The "application" part of it definitely makes it easier to use, but a lot of trading desks build their own algorithms and pricers on top of that data for the functionality they really need. The built-in stuff Bloomberg has tends to be overly simplified.
This has not been my experience. My view of Bloomberg is two primary products: (1) Bloomberg terminal and (2) Bloomberg BPIPE.
(1) is a desktop (and now mobile) application that traders, sales, etc. use to communicate, view raw data, and view derived data in "mini-apps" (price this bond under scenario X, etc.). The base cost is about 25K USD per year (there are local IT costs above the terminal license fees). There is a raw data access API for Excel, but it is very limited. They block your API access if you try to download too much data. The days of pricing any kind of volume of products from an Excel Bloomberg API are mostly gone. Also, you cannot run it on a server and just screen-scrape / use the data API all day long.
(2) is a raw data feed service, akin to Reuters. Not well understood by industry outsiders: there is no "all you can eat" data service anywhere in finance. Period. Every primary data source is now carefully guarded with license costs and special rules (no redist, etc.). Some stock exchanges make more money selling data licenses than collecting trading fees! I have no idea about BPIPE prices, but I assume expensive and per data source with nearly infinite granularity.
If you believe it: 20K USD per terminal and ~325,000 terminals as of 2016. That is 6.5 billion USD per year. Nice.
About revenue: (2) No idea. I cannot find any reliable published stats.
Lastly: I regularly see people argue on HN about an open source version of Bloomberg. It is impossible for two reasons: (a) Network effect in the communication channels -- lots of people mostly use Bloomberg chat / email and hardly use data, besides the trivial. Most people on the buy side (money management, hedge funds, portfolio managers) join Bloomberg to talk to other people on Bloomberg. And: (b) data licensing.
"Bloomberg for x.." strikes me as just the modern corporate version of "Uber for x..".
I've heard so many founders try to describe their business as "Bloomberg for.." when trying to describe a mixed focus offering of products that I immediately hear alarm bells nowadays.
Where does a system like https://picterra.ch/ lie? They seem to provide a way to tune algorithms for your images. Are those "algorithms nobody want", or is that helping niche use-cases (by allowing customisation)?
Disclaimer: never actually used their system, just saw presentations about it, which show how one can train an algorithm to count... stuff.
Hey, I'm the lead engineer at Picterra. Would be happy to answer any questions.
What we are trying to do is indeed allow customers to build algorithms tailored to their use cases. We often joke that the reason our ML works is because we are overfitting to customer datasets/use cases. But I think that's somewhat true in the sense that we are not going for "worldwide trees counting" but more for "reliably count cars on these 50 parcels".
It's also interesting to note that a sizeable chunk of our revenue is coming from companies with drone data where it makes even more sense to be specific to customer data (resolution, time of day, geo, etc...).
Terrific company and interesting technology. I see them as an application making it possible for non-technical people to automate basic workflows. I’m skeptical that’s a big audience but low code/no code solutions in other industries have enjoyed surprising success, so I look forward to being pleasantly surprised.
Thanks for the reply! I'm curious where you draw the line between derived data and a full application for something like modeling climate risk for insurance?
It’s blurry. Can you take an action based on information presented to you from within the software? Probably a sign it’s an application at that point and not just a data feed.
One thing that comes to mind for me is GasBuddy. They simply show me the raw data feed (price of gas at various stations). It's up to me to decide based on my MPG, distance, price, etc. what is worth fueling up at.
Who are you to tell a company what they can sell? If they want to sell the raw data so people can do whatever they want with it, that sounds reasonable. If they also want to sell data with some analysis already applied, so that people who don't have the in-house capability and just want pretty pictures to put in the deck can buy it, then why not sell that too?
You don't always have to order the biggie fries and drink, you can just order the standard meal.
I can see how this might get lost if you are looking at it from outside the industry. Realistically, Planet should be in the business of selling pixels en masse. However, Planet has decided that isn't enough for them; they want more of the value-add. This is built into their licensing/ToS. However, their black-box analytic solutions simply aren't good enough. They never have been, and if they aren't willing to give you a full chain of custody for every assumption a given algorithm uses, they never will be. Any solution that isn't completely transparent isn't good enough.
If they wanted to be crushing it, they'd lower the cost/barrier to entry on the pixels themselves. Get the data out into people's hands and use more open, easier-to-build-on licenses. Let people actually use the data.
I've been on the other side of 5 failed attempts to work with Planet on large, medium, non-profit, and startup-scale projects, as early as 2016 and as late as 2021. They just don't get it. If your data isn't easy to use, I won't use it. If your license is going to prevent me from building what I need to from that data, I won't use it.
I’m sorry to hear that. Planet is a company I admire quite deeply. I think they are well aware of the difficulties folks like you have had working with them in the past and I’ve seen firsthand the lengths they’ve gone to remedy that once made aware. Their new product organization is very focused on taking things in a new direction that I have high hopes for.
Having worked in the sensing analysis market for a long time, I would frame the recurring business problem as follows, and it applies to sensing data generally, not just satellite imagery:
There is a large market gap between the sensor platform providers and the would-be end users of that sensor data. Business is constrained by the fact that the end users are not capable of consuming what is being produced and neither party in the transaction is really qualified to solve that problem.
Sensor platform companies can't make their data more consumable because they only know their data, not the analytic nuances of the end users nor the other sensor modalities that the end user may be using. They try, with the hope of improving revenue, but it almost never comes off. The end users are experts in their analysis problems and application domains but usually have no skill or capacity to write custom performance-engineered infrastructure software to bring the sensor data into their domain. This software often has pretty hardcore computer science requirements and there is minimal tooling, certainly not in open source, to make this easier or more scalable. There is clearly a market opportunity, but neither of the obvious parties is in a position to address it even for their own immediate benefit.
There are a few companies with expertise in building high-performance data tooling for analytic workflows across sensor modalities with enough application understanding to bring it into the end user's domain. There is a lot of commonality across application domains from the view of tooling requirements but you have to have expertise in several vertical domains to see it, so it is repeatable. These companies have neither the data nor the application but they bring them close enough together that they can "make the market" as it were. Currently this is a high-end contracting business rather than a general platform, so it doesn't scale. But if this tooling became part of a general platform, it would let the sensor data platforms focus on data collection without worrying about how to make the data more consumable for various application domains, which they were failing at anyway.
The state of software tooling for these sensing data applications is primitive, overfitted for single data sources, and not nearly scalable enough. Most other problems in the sensor data market are consequences of this reality.
I think this is a fair summary. Don’t sleep on GEE, Orbital Insight, and Descartes Labs, though. All three have incredibly impressive platforms that can spin up new custom models and deploy them at scale very quickly. And Microsoft is starting to make a major push into the space with their Planetary Computer initiative. Early innings yet, though, I agree about that.
A challenge is that there is often an increasing use case to blend remote sensing data models with active telemetry data models, since they are complementary for many (most?) analytics of this type. The remote sensing platforms have no capacity to deal with telemetry data models, and the telemetry platforms have no capacity to deal with remote sensing.
The workloads and computer science are very different, but bringing them together would add a lot of value. At least with current platforms on the market, one is always to the exclusion of the other.
Me: 2 years at a satellite imagery company a few years back
This is mostly true, though I will add there are some repeatable insights that people will pay for. In the financial sector, there were plenty of people who paid for car counts data back in 2016 (though many customers did not renew) and there are still people today who are interested in oil inventories data, and mall traffic data (though satellite data is being augmented there these days). The general theme of the piece is correct imo though. When the company I worked at started, we thought there would be many use cases, it became clear after a while there were only a few insights with large markets, and after that a really long tail of custom asks.
Thank you for sharing that, I find it very validating. Sometimes it's nerve-racking to publish an opinion piece like this when I know I am extrapolating from very limited and biased information. Sounds like you worked at Orbital Insight, Descartes Labs, or SpaceKnow (in order of likelihood)?
Yes, I did work at one of the above. The car count data was actually more interesting than people realized. One thing even many quant funds missed was that the quality of our "raw counts" still depended on cloud cover, which was metadata we didn't share, for whatever reason (no one asked for it), so the quality of the signal depended on the geography. I.e., sunny places had more signal - there was even a short window back in 2016, before credit card data was more heavily used, when you could predict the earnings results of retail chains in SoCal pretty well with our data.
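A toy illustration of that point (made-up numbers, not the firm's actual method): a car-count series is only as good as the fraction of usable looks, so the cloud metadata matters as much as the counts themselves.

    import pandas as pd

    obs = pd.DataFrame({
        "date": pd.to_datetime(["2016-03-01", "2016-03-08", "2016-03-15"]),
        "cars_counted": [410, 120, 395],
        "cloud_fraction": [0.05, 0.80, 0.10],  # the metadata that wasn't shared
    })
    # Drop mostly-cloudy passes instead of trusting the raw count; sunny
    # geographies keep more observations, hence the stronger SoCal signal.
    usable = obs[obs["cloud_fraction"] < 0.5]
    print(usable["cars_counted"].mean())  # 402.5 vs a naive mean of 308.3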
To your article's point, I believe my company ended up trying to switch to a platform that allowed people to create their own insights, but I'm not sure how successful that was in the end.
I am not rooting for people to fail. We're building an industry together, not playing a zero-sum game.
Of course this is not the way many folks see industry competition. I think, for example, Intel, AMD, and nVidia can all "win." In fact, when one improves they can all move forward.
The satellite imagery industry is quite small in terms of the people who work in it. I love and respect the people who have built the same products that I am indirectly critiquing in this piece.
This is a cool industry, because most of the effort is going toward things like monitoring the effects of climate change, or mapping natural disasters in real time to support crisis response, or illuminating human rights violations around the world. Rooting against the people working on that is icky.
In my opinion, we're all competing against obscurity (who buys satellite data today?!), not each other.
I predict this changing in 5 years or less, as demand increases and the profit potential attracts new money. Hope I’m wrong about that, because as you say there’s so much untapped potential value for a bunch of disparate, but critically important fields.
As someone who just got out of one of the thousands of "new" startups to now be at a boring, run-of-the-mill software firm... I grapple with this a lot. 90% of the work I did was on defense stuff (I think you wrote about it before), but that 10% supporting NGOs or deforestation or ESA/NASA research grants always keeps gnawing at me to run back into the space. It's tough to tell if an RS company really has its money where its mouth is on stuff like that until you work there.
I don't mind you saying this - I think my style is very off-putting to some. The same things that disenchant you about the style are what make others love it. I try to be entertaining and informative, rather than just informative, because that's what makes it enjoyable for me. But like all entertainment, it's stylized, so it will put some people off. Hopefully you found the content valuable, at least!
For me, it's not so much the presence of animated GIFs as that I wanted to be able to scroll one off the screen while I focused on the words. For me, the top 30% of the page had something moving no matter where I scrolled. I can appreciate style and levity, but for me the busy screen was enough to drive me away. Looks like my Reader mode would have blocked all of the images; I should have just gone with that ;)
Yeah, I have decided I'm just too old for this style. Something wants to be serious, but then it tries to be cute by adding all of the GIFs that do nothing to enhance it. I'm all for adding media in whatever format adds to the understanding of the material. These kinds of garnishes are no better than the animated backgrounds of MySpace and the Web 1.0 days. Now, get off my lawn!!!
A year or so ago there was a long-ish discussion about countthings.com - technologically, a very simple app that's seemingly found a lot of value in having a close product-market fit and being able to (efficiently) sell to customers that don't have their own ability to build custom CV solutions. https://news.ycombinator.com/item?id=27261399
I really understand the author's argument that lots of satellite companies are "doing this wrong" by investing lots and lots of resources into "new products" that cost a lot of money to produce but don't have a clear user story. But I wonder if there's a way to do this "right" by building very simple, customizable software that lowers the floor in terms of what sorts of customers are able to purchase satellite data feeds - maybe even using the same software that CountThings does?
This seems to be qualitatively very different from the types of "data feeds" that OP is talking about, which try to measure "useful analytics", rather than working on a scalable process for shipping bespoke solutions to customers with turn-key integration. This is (one way) to tackle the long-tail problem. But maybe it runs into some other pitfalls.
Great observation. Check out Descartes Labs (all-purpose platform) or Picterra (closer to CountThings for sat imagery) as two examples of companies trying to make it easier to build personalized models.
I found the argument(s) interesting although the animations hurt readability and clarity.
I've watched the area of commercialized remote sensing products with some interest, because I'm in the non-commercial remote sensing line.
The way NASA handles derived data products is (broadly) through "levels"; a toy sketch of the chain follows the list:
L0: measurement, still in instrument-native units (e.g., DNs)
L1: calibrated and put into physical units (e.g., watts/cm2)
L2: scientifically-useful product (surface reflectance at 450nm, CO2 concentration)
L3: L2 that has been aggregated into a map, possibly also across time or instrument
L4: data has been filtered through a time-stepped physics-based model
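Here's the toy sketch (made-up coefficients; real pipelines use per-instrument calibration files and full retrieval algorithms, so treat this as schematic only):

    GAIN, OFFSET = 0.025, -1.2  # hypothetical radiometric calibration

    def l0_to_l1(dn):
        """L0 -> L1: raw digital numbers to calibrated radiance."""
        return GAIN * dn + OFFSET

    def l1_to_l2_index(rad_red, rad_nir):
        """L1 -> L2: a physically meaningful quantity (an NDVI-style
        vegetation index here) that can be validated in situ."""
        return (rad_nir - rad_red) / (rad_nir + rad_red)

    red, nir = l0_to_l1(1800), l0_to_l1(3100)
    print(round(l1_to_l2_index(red, nir), 2))  # ~0.27; L3 would grid these into maps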
The lowest level that's commonly useful for applications is L2. A decent-sized satellite might have several L2 data products serving different user communities, e.g., CO2 concentration, methane concentration, and photosynthetic activity can all be recovered from remote-sensing spectroscopy, but they serve different uses.
One advantage of the above decomposition is that L2-L4 data can be validated with in-situ measurements. They are not just indexes -- they are targeted at a certain physically-measurable quantity.
This allows judging whether the intermediate products (L2 CO2) are actually good, or improving. It also allows combining intermediate products from different sources (which is a hard problem), because both sources are trying to measure the same thing by design.
It is true that (for example) current spectroscopic remote sensing allows retrieval of a lot of L2 products for diverse communities -- scores of products, from mineral abundance to urban land use to agriculture to snow/ice to algae.
I do agree with OP that it will be impossible for any company to "cover the waterfront" of even half of these products. The measurement and each individual product take a lot of effort to get right.
But it also seems like there are commercial opportunities for some specific such products -- e.g., methane concentration/fluxes, or evapotranspiration/soil moisture.
Wouldn't a subscription-based service to these products allow for continuous improvement of the underlying product, either through new measurements or through better algorithms?
So, in a nutshell, in the context of OP, what's the difference between:
-- an always-improving subscription-based "vertical service" for a L2 product like I just described,
vs.
-- a "problem-solving application" like the OP is advocating?
Thanks for taking the time to write this out, great info.
There are two hallmarks of an application that differentiate it from a data service:
1. Earth observation is a minority of the data that it manages and maintains
2. Users are not just presented with information, they are prompted to take action
I would argue that levels L3 and L4 are probably falling into the same trap as the data feeds I described in the blog post. Do you know if USGS publishes download metrics for each dataset associated with Landsat, for instance? I bet if you computed a ratio of time/investment to downloads, you'd find L2 outperforms all other categories. But I could be wrong; I have never seen the download data and don't know the relative levels of effort to produce each dataset they offer.
OK, in your hallmark #1, I suppose you're distinguishing between, say, topographic or land-cover maps that come from remote sensing and that might be useful in flood risk assessment, versus property-valuation or storm-drainage infrastructure maps that don't come from remote sensing and that might also be needed for flood risk assessment?
I.e., the value is in a complete "application" solution to a given problem rather than in hoping for a "killer remote-sensing data product" that would (in theory) solve the problem?
About your question: I'm on the analysis side, not the infrastructure side, so I don't know about the download metrics; they must be tracked, but a quick search didn't turn up anything accessible. Download metrics also aren't a great proxy for value, because lots of people and groups do programmatic downloads -- the marginal cost of a download is zero.
I think the L0-L4 distinction is trying to illustrate these points:
-- L0 and L1 data are sensor-dependent and not really useful for solving problems
-- There's a lot of value in developing calibrated data (L2+)
-- Basic visual imagery often isn't calibrated, so subsequent processing will necessarily be ad hoc and thus of limited value for any consequential decision-making
I'm reminded of the Simpsons episode where the government says they used satellites to look for the trillion dollar bill at Mr. Burns's mansion, but all it told them was it's not on the roof.
Underpinning much of this are the costs associated with image capture and the lack of fine-grained temporal resolution. You can write software to analyze and pinpoint certain "things" the customer is looking for, but in all cases that analysis is a snapshot of a moment in time. Stringing together multiple snapshots helps paint a broader picture, but the revisit rate is still a few times a day in the BEST case (and the customer likely has deep pockets if that's true). The bottleneck is image capture.
The satellite industry is an incredibly high-walled garden. Launch costs are falling, and the use of off-the-shelf hardware combined with smaller satellite forms is improving revisit rates, but it's a steep hill to climb nonetheless. Custom-built monitoring algos are subject to diminishing returns based on a variety of factors, and the number of images in a dataset is certainly part of that.
Agreed. I'm biased, but keep an eye on synthetic aperture radar. The physics of those systems allows for monitoring with a pretty different theoretical limit on price-per-revisit than their optical counterparts: https://joemorrison.substack.com/p/why-im-leaving-azavea-to-...
Not broadly true in my opinion. To give an example from the financial services industry:
* Satellite provider or analytics firm sells "car counts" for retail stores
* Financial institution is intrigued; this must correlate with sales, right?
* But... why don't we just buy credit card transaction data and foot traffic data from clearing houses and GPS trace providers?
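To make that concrete, here's a toy version (all numbers invented) of the first-pass check a fund might run on such a feed -- does the count series even track the reported number?

    # Toy sanity check on a "car counts" feed: does the quarterly count
    # series track reported revenue? All numbers invented.
    import numpy as np

    car_counts = np.array([12_400, 13_100, 11_800, 14_200])  # counts per quarter
    revenue = np.array([1.91, 2.02, 1.84, 2.15])             # reported, $B

    r = np.corrcoef(car_counts, revenue)[0, 1]
    print(f"correlation: {r:.2f}")
    # Even a high correlation doesn't answer the real question: why not buy
    # the transaction and foot-traffic data that measure sales more directly?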
I often liken satellite imagery to salt. It's great to finish a dish, but should never be consumed alone. If you don't believe the foot traffic data or credit card transaction data you're buying, you can use satellite data to check it or refine the model. But that's a niche within a niche.
The other issue I pointed out in that article is customer savvy -- if you're a quant fund sophisticated enough to make use of an arcane data feed, you're very likely sophisticated enough to generate that feed yourself from raw data. And if you do it yourself, it's suddenly part of a "proprietary" solution. So you'd rather just buy images and do the heavy lifting yourself than pay a premium for a data feed that doesn't quite solve your problem by itself.
This was our experience with car count data which we sold to most of the sophisticated quant funds a few years back. We sold 'raw counts' and our own 'derived counts' that took into account cloud cover etc and was produced by our data science team. The quant funds completely disregarded our derived counts as useless.
Having been on the other side of that... quant funds absolutely want raw data wherever possible, but I suspect for the application you're talking about there's some room for derived data; the response variable for e.g. credit card data is relatively infrequent (earnings are reported once per quarter), so using high-dimensional, high-frequency data can be more trouble than it's worth. Genscape is a good example of a company with derived data that everyone in the industry subscribes to.
I think this applies to a lot of types of data. Selling "insights" is really just selling a filtered version of the raw data. If the client's needs match up with your filters, then you've saved them a lot of work. However, if the filters don't match up perfectly with the client's needs then they'll need to do filtering and processing themselves. And in that case, they'll likely get better results just working with the raw data, instead of whatever data you think they need.
I suspect it is at least approximately true of all data sets that are both broadly applicable and difficult to acquire (i.e. requires domain expertise to acquire).
I want SpaceX to put cameras on their Starlink fleet and provide near real-time satellite views via a Google Maps interface, down to 30 cm resolution across most of the planet. As they phase out old birds and deploy new ones this is feasible for them. Each bird already has connectivity. They just need the cameras.
Worldview-4 (a commercial 30 cm satellite) had a reasonably similar orbit and 10 times the mass of a Starlink satellite. Even if they are able to significantly improve on that design, it's still going to be a pretty big "just need the camera".
Unfortunately, physics won't allow for that - with optical instruments you need a big aperture (lens) or you need to fly much lower than Starlink, or both. 1 or 2m imagery may definitely be possible, though! Check out BlackSky for reference.
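To put rough numbers on the physics -- my back-of-envelope, using the usual Rayleigh diffraction criterion and an assumed ~550 km Starlink-like altitude:

    # Diffraction-limited ground resolution: GSD ~ 1.22 * wavelength * altitude / D
    wavelength = 550e-9  # green light, meters
    altitude = 550e3     # rough Starlink-shell altitude, meters
    gsd = 0.30           # desired ground sample distance, meters

    aperture = 1.22 * wavelength * altitude / gsd
    print(f"required aperture: {aperture:.2f} m")  # ~1.2 m of optics

That's roughly a meter of telescope for 30 cm imagery, which is why flying lower or settling for 1-2 m resolution changes the equation so much.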
If you can assume the scene you're capturing is fairly static over time, you can sometimes cheat physics a little (keeping space constant and leveraging time) by resampling areas multiple times from different perspectives and cleverly combining that data together.
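A minimal sketch of that trick (shift-and-add onto a finer grid; it assumes a static scene and already-known subpixel offsets, which in reality are the hard part, along with PSF modeling and deconvolution):

    # Minimal shift-and-add super-resolution sketch: combine several
    # low-res frames of a static scene, each offset by a known subpixel
    # shift, onto a 2x-upsampled grid.
    import numpy as np

    def shift_and_add(frames, offsets, scale=2):
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        hits = np.zeros_like(acc)
        for frame, (dy, dx) in zip(frames, offsets):
            # place each low-res pixel at its subpixel-accurate spot
            ys = (np.arange(h)[:, None] * scale + round(dy * scale)) % (h * scale)
            xs = (np.arange(w)[None, :] * scale + round(dx * scale)) % (w * scale)
            acc[ys, xs] += frame
            hits[ys, xs] += 1
        return np.divide(acc, hits, out=acc, where=hits > 0)

    # usage: four frames shifted by (0,0), (0,.5), (.5,0), (.5,.5) in
    # low-res pixels fill every cell of the 2x grid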
Problem is that most of the interesting bits people want more data on are quite dynamic in space and time, not just time. Even when they're not, you don't get linear improvements from subsampling, and you eventually hit diminishing returns with such approaches.
The companies that mine iron and the companies that convert the raw iron into specialist equipment are not the same. So why do we expect companies that capture satellite imagery to create solutions on top of it?
Some years ago there was a satellite imagery company on HN offering a free trial. I was curious about the construction site of my house, whether I could discern where trenches were and such. But their interface stumped me: it took time to figure out how to search the dataset by coordinates, then I got some zip dataset download, and then I had to figure out how to open the images. Nothing in them was recognizable.
If they really want new customers they should make this more accessible to laypeople.
This seems like the kind of insight that is better monetized than preached - the value will not be perceived until there is a business success that derived from it.
I sell satellite imagery. I want more people to start application companies, and fewer people to start data feed companies. "Preaching" is the most scalable way to try to convince people to change tack. I can't personally start dozens of application companies, but I can hopefully help spur the founding of those firms through my writing.
> I want more people to start application companies
Are you aware of ways for a hypothetical start-up team to get cheap/affordable raw imagery data to attempt this? Instead of paying B2B prices upfront just for experimentation?
It’s a little bit like saying “the photography industry still has no idea what customers want”. That’s the kind of thing open markets figure out. There are data providers of all kinds of data, and the data consumers need to find this data somehow.
It's so much cheaper to take photos on land vs. in space. Seems like that price difference and what people do with satellite photos vs. handheld camera photos makes them pretty different.
I want worldwide 24/7 live video footage. I want it with enough resolution to identify letters on a printed page on the ground. And I don't want it to cost the entire global GDP to build and operate.
One frame per 24-48 hours is unacceptable.
I said I want worldwide 24/7 120 Hz video, not "in the footprint sometimes by chance" 10 km^2-at-a-time imagery.
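For a sense of scale, a generous back-of-envelope (8-bit panchromatic pixels, no compression, no overlap; all assumptions mine):

    # Order-of-magnitude data rate for worldwide 120 Hz video at 30 cm GSD
    earth_surface = 5.1e14               # m^2
    gsd = 0.30                           # meters per pixel
    pixels = earth_surface / gsd**2      # ~5.7e15 pixels per frame
    rate = pixels * 120 * 1              # frames/s * bytes per pixel
    print(f"{rate:.1e} bytes/s")         # ~6.8e17 B/s -- thousands of times
                                         # total global internet capacity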
Even more complicated and less obvious than some of the other sorts of Earth observations that the author talks about.
I could write about this all day, but I'll start with the obvious: you're competing against a forecast that may have been made 30 minutes ago, an hour ago, 6 hours ago, 24 hours ago, or sometimes even more. So in some cases, at best you might be alerting people that something they already expected to happen is, now, actually happening. How useful that is depends on the context. Detecting a wildfire as it ignites? Cool - but most likely, if it's near an urban area, people already saw the smoke, or people were already ready to react because a Red Flag warning was posted. Lightning strikes? Folks already heard the thunder, and hopefully would've seen a risk of thunderstorms in the forecast earlier in the day or prior.
Carefully and succinctly incorporating narrow weather observations into existing forecast and alerting systems as a way to buttress them, decrease noise/boost signal, or otherwise capture a tiny bit more value than what was already there might work. But beyond that I struggle to see massive amounts of value for most of the use cases that many industries or communities wrestle with regarding weather alerting.
Weather data is usually derived from geostationary satellites, which is sort of an adjacent field to the lower-orbit imagery the article is based on (I believe?)... but I know of a couple of projects doing analytics here - not sure about commercial potential though, as they're early-stage startups or academia.
You are correct. I’m talking mainly about low Earth orbit satellites. There are some LEO weather satellites or concepts - Tomorrow.io is a notable one.
You are right - I wasn't really writing it for a general audience, more for the few thousand people that are working every day in the commercial satellite imagery industry. There's no specific context to understand (beyond that I repeat myself constantly, so for regular readers of mine none of what I outlined in this essay is news).