During the hackathon the team only did a simulated flight, not a real flight, so take the effectiveness results with a grain of salt. In any environment with significant seasonal changes, localization based on Google Maps will be a lot harder.
Every 5 days, a satellite from the Sentinel mission takes a picture of your location; it's 8 days for the Landsat mission. That data is publicly available (I encourage everyone to use it for environmental studies; I think any software people who care about the future should use it).
It's obviously not the same precision as Google Maps, and it needs updating, but it's enough to take into account seasonal change and even brutal events (floods, war, fire, you name it).
I don't know where you live, but the default search at https://apps.sentinel-hub.com/eo-browser/ uses Sentinel-2 (true color), 100% maximum cloud coverage (which means any image), and the latest date.
So you should be able to find the tile of your region at a date close to today, definitely not 4-6 months old.
It occurred to me that in a war or over water this wouldn't be useful. But I think it will be a useful technology (one that, to be fair, likely already exists), in addition to highly accurate dead reckoning systems, as a secondary fallback navigation when GPS is knocked out or unreliable.
Why do you say that? Navigational techniques like this (developed and validated over longer timeframes, of course) are precisely for war, where you want to cause mayhem for enemies who try to stop you by jamming GPS.
This is not just an idea; we have already fielded systems.
> over the water this wouldn’t be useful
What is typically done with cruise missiles launched from sea is that a wide sweep of the coast is mapped where the missile is predicted to make landfall. How wide this zone has to be depends on the performance of the inertial guidance and the quality of the fix it starts out with.
For the human eye maybe. For a computer using statistics less so. Extracting signals under a mountain of noise is a long solved problem - all our modern communication is based on it.
That is all really interesting speculation, but I'm not describing a system which could be, but one which is already available and fielded. In cruise missiles it is called DSMAC.
Basically inertial guidance enhanced by terrain matching. Which is great, but terrain matching as a stand-alone is pretty useless. And it still requires good map data. Fine for a cruise missile launched from a base or ship; it becomes an operational issue for cheap throw-away drones launched from the middle of nowhere.
Well, if you combine it with dead reckoning, I guess even a war-torn field could be referenced against a pre-war image?
I mean, a prominent tree along a stone wall might be sufficient to be fairly sure, if you at least have some idea of the area you're in via dead reckoning.
And dead reckoning has been standard in anything military for decades anyway.
As an added data source to improve navigation accuracy, the approach sure is interesting (I am no expert in nav systems, just remotely familiar with some of them). But unless the approach is tried in real-world scenarios and developed to proper standards, we won't see it used in a military context. Or in civilian aerospace.
Especially since GPS is dirt cheap and works just fine for most drone applications (GPS, Galileo, GLONASS, doesn't matter).
For a loitering drone I imagine dead reckoning would cause significant drift unless corrected by external input. GPS is great when it's available but can be jammed.
I was thinking along the lines of preprocessing satellite images to extract prominent features, then using modern image processing to try to match against the observed features.
A quite underconstrained problem in general, but if you have a decent idea of where you should be due to dead reckoning, then perhaps quite doable?
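That two-stage idea (preprocess a reference image, then match within a window supplied by dead reckoning) can be sketched in a few lines of NumPy. This is only an illustration with a synthetic tile and patch, not the team's actual pipeline: normalized cross-correlation of the observed patch against the reference image, searched only near the dead-reckoning guess:

```python
import numpy as np

def match_in_window(sat, obs, guess, radius):
    """Find the offset of patch `obs` inside image `sat` by normalized
    cross-correlation, searching only within `radius` pixels of the
    dead-reckoning `guess` (row, col of the patch's top-left corner)."""
    ph, pw = obs.shape
    o = (obs - obs.mean()) / (obs.std() + 1e-9)
    best, best_score = guess, -np.inf
    r0, c0 = guess
    for r in range(max(0, r0 - radius), min(sat.shape[0] - ph, r0 + radius) + 1):
        for c in range(max(0, c0 - radius), min(sat.shape[1] - pw, c0 + radius) + 1):
            w = sat[r:r+ph, c:c+pw]
            s = (w - w.mean()) / (w.std() + 1e-9)
            score = (o * s).mean()          # NCC score in [-1, 1]
            if score > best_score:
                best_score, best = score, (r, c)
    return best, best_score

# Demo: a random "satellite tile" and a patch cut from a known location.
rng = np.random.default_rng(0)
sat = rng.random((200, 200))
obs = sat[120:152, 80:112]                  # 32x32 observed patch at (120, 80)
pos, score = match_in_window(sat, obs, guess=(115, 85), radius=10)
print(pos)                                  # recovers (120, 80)
```

Restricting the search to the dead-reckoning window is what makes the otherwise underconstrained matching tractable (and cheap enough for onboard compute).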
You can't use visual key-points to navigate over open water.
You can use other things like Visual Odometer, but there are better sensors/techniques for that.
What it can do, if you have a big enough database onboard, and descriptors that are trained on the right thing, is give you a location when you hit land.
And for a little less, you can buy the original, from Analog Devices.[1]
Those things are getting really good. The drift specs keep getting better - a few degrees per hour now. The last time I used that kind of thing it was many degrees per minute.
Linear motion is still a problem, because, if all you have is accelerometers, position and velocity error accumulates rapidly. A drone with a downward looking camera can get a vector from simple optical flow and use that to correct the IMU. Won't work very well over water, though.
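A minimal sketch of how two downward-camera frames can yield that motion vector, assuming pure translation between frames (a simplification of real optical flow, which also handles rotation and scale): phase correlation via NumPy FFTs recovers the pixel shift.

```python
import numpy as np

def phase_correlation_shift(prev, curr):
    """Estimate the integer-pixel translation of `curr` relative to `prev`
    via phase correlation; a stand-in for optical flow under pure translation."""
    cross = np.fft.fft2(curr) * np.conj(np.fft.fft2(prev))
    cross /= np.abs(cross) + 1e-12          # keep only the phase difference
    corr = np.fft.ifft2(cross).real         # impulse at the shift location
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped indices to signed shifts.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(1)
ground = rng.random((128, 128))
frame_a = ground
frame_b = np.roll(ground, shift=(3, -5), axis=(0, 1))   # drone moved: scene shifts
print(phase_correlation_shift(frame_a, frame_b))        # → (3, -5)
```

With known altitude and frame rate, that pixel shift converts to a ground velocity vector usable to correct the IMU. Over featureless water the correlation peak collapses, which is exactly the failure mode described above.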
>Linear motion is still a problem, because, if all you have is accelerometers, position and velocity error accumulates rapidly.
An INS will usually need some kind of sensor fusion to become accurate anyway. Like how certain intercontinental ballistic missiles use stars (sometimes only a single one) as a reference. But all these things assume a clear line of sight, and even this Google Maps image-based navigation will fail if the weather is bad.
Seems like a really good use case for downtown anywhere, because otherwise the buildings make your GPS go haywire and navigation sucks. Though I will say (and certain providers try not to admit this too loudly), a much better way to handle those downtown scenarios is to map out and keep track of the locations of WiFi signals and use their relative strength, as detected by your device, to triangulate your position instead. It works really well in super dense areas like cities.
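A toy sketch of that idea, not any provider's actual algorithm (the path-loss parameters are assumed): convert RSSI readings to rough distances with a log-distance model, then take an inverse-distance-weighted centroid of the known access-point locations.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power=-40.0, path_loss_exp=2.5):
    """Invert a log-distance path-loss model (assumed parameters):
    RSSI = tx_power - 10 * n * log10(d)."""
    return 10 ** ((tx_power - rssi_dbm) / (10 * path_loss_exp))

def weighted_centroid(ap_positions, rssi_values):
    """Estimate position as a centroid of known AP locations, weighted by
    inverse estimated distance (a common lightweight alternative to full
    trilateration)."""
    d = np.array([rssi_to_distance(r) for r in rssi_values])
    w = 1.0 / (d + 1e-9)
    aps = np.asarray(ap_positions, dtype=float)
    return (aps * w[:, None]).sum(axis=0) / w.sum()

# Three mapped access points; the strongest reading comes from the nearest AP.
aps = [(0.0, 0.0), (30.0, 0.0), (0.0, 30.0)]
rssi = [-50.0, -70.0, -70.0]    # device is closest to the first AP
est = weighted_centroid(aps, rssi)
print(est)                      # estimate lands near the first AP at (0, 0)
```

The hard part in practice is the database of AP locations, not the math, which is why only providers with large mapping fleets do this well.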
Apple too, for a long time now. I used to use my 2015 iPad WiFi edition for navigation despite it not having a GPS chip. It worked surprisingly well as long as buildings were around.
IIRC many cruise missiles do a similar thing, except with radar and topographical maps instead of cameras and photos. Obviously the cruise missiles cost a hair more as well.
First generation Tomahawk did this (TERCOM) but later blocks acquired image matching guidance which is presumably today very advanced (I heard about it first around 1988).
The YouTube channel TheOperationRoom has a very well animated play-by-play (almost to the minute) of the air and ground war in Gulf War 1 in Iraq/Kuwait. In one of the videos he mentions Tomahawks having to take a long route into Baghdad because that was the only route with enough terrain for the missile to follow to the target. This was 1991, so maybe older inventory.
Tomahawk missile routes early on took quite a bit of planning because of this. They could only follow terrain they'd recognize. It had other capabilities (mostly inertial at the time), but the "clever" stuff came from following well-mapped routes.
Back in the day, TI had a version of their Explorer Lisp machine contained on a single 256-pin chip. These were being considered for the next-generation guidance packages on the Tomahawk.
The drone uses a camera mounted underneath it to position itself against imagery from Google Maps, matching similarities in the images to get a rough estimate of its coordinates. Doesn't Google Maps still require internet, you may ask?
Google Maps allows users to download segments of maps ahead of time, usually for when you are travelling or camping in remote areas. In this instance, the team used that feature to their advantage, allowing the drone to continue operating regardless of whether it has a GPS satellite connection.
The entire point of such a build is to operate autonomously with local data in the presence of jamming or signal loss for other reasons.
You can download OSM and satellite datasets covering thousands of square miles in a few GB. If you need very high-resolution satellite imagery, it's going to be on the order of 1 GB per ~10 sq mi, depending on what you're up to.
Not sure how they do it, but I think it's quite feasible to extend a Kalman filter to include camera input referenced to an image. E.g. https://doi.org/10.3390/aerospace9090503
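In that spirit, here is a toy 1-D example (a sketch, not the linked paper's EKF): a Kalman filter predicts from IMU acceleration and corrects with occasional absolute position fixes, such as an image match against a reference map could provide.

```python
import numpy as np

def kalman_1d(accel, fixes, dt=0.1, accel_var=0.5, fix_var=4.0):
    """1-D Kalman filter: predict from IMU acceleration, correct with
    occasional absolute position fixes (e.g. from matching the camera
    view against a reference image). `fixes` maps step index -> position."""
    x = np.zeros(2)                      # state: [position, velocity]
    P = np.eye(2) * 10.0                 # state covariance
    F = np.array([[1, dt], [0, 1]])      # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])      # acceleration input
    H = np.array([[1.0, 0.0]])           # we only measure position
    Q = np.outer(B, B) * accel_var       # process noise from accel noise
    track = []
    for k, a in enumerate(accel):
        x = F @ x + B * a                # predict from the IMU
        P = F @ P @ F.T + Q
        if k in fixes:                   # correct with a position fix
            y = fixes[k] - H @ x
            S = H @ P @ H.T + fix_var
            K = P @ H.T / S
            x = x + (K * y).ravel()
            P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return np.array(track)

# Stationary drone, biased accelerometer: drift grows without fixes,
# stays bounded when a fix arrives every 50 steps.
steps, bias = 500, 0.05
accel = np.full(steps, bias)
no_fix = kalman_1d(accel, fixes={})
with_fix = kalman_1d(accel, fixes={k: 0.0 for k in range(0, steps, 50)})
print(abs(no_fix[-1]) > abs(with_fix[-1]))   # True: fixes bound the drift
```

The real system is a full EKF over 3-D position, velocity, and attitude, but the structure is the same: dead reckoning in the predict step, camera-derived fixes in the update step.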
This article strongly reminded me of the "helicrane" system developed by NASA for the last Mars probe, which recognized the terrain below it and used that to steer toward the right drop location: https://www-robotics.jpl.nasa.gov/what-we-do/flight-projects...
Fun stuff. I wonder if there are any requirements for how "bumpy" the terrain needs to be for the system to recognize it properly.
This is already being utilized in Ukraine since GPS guidance doesn't exist in large areas. I've also read that both Switchblade and Lancet drones have this ability.
The future of warfare does seem to be incredibly cheap autonomous (or nearly so) drones. I know Palmer Luckey's working on this, but his offering is still incredibly high priced...
Who's looking at this problem with the perspective that these should be 'ammo' and not 'aircraft'?
I wonder: if you can just measure the gyroscope readings and velocity, then, knowing a starting coordinate, could you just calculate the current coordinate? What are the unknown parts of this equation that make a satellite / other methodology a must?
The Kalman filter[2], which pops up on this site rather frequently, is often used for updating the current state.
The issue is that sensors are not perfect. It's not just noise: they might be slightly biased in one direction, for example. These errors accumulate, so you can be quite far off after not that long a time.
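A quick back-of-the-envelope illustration: a constant accelerometer bias b, double-integrated into position, gives an error of 0.5·b·t², so even a tiny offset blows up fast.

```python
# Position error from double-integrating a constant accelerometer bias.
b = 0.01                       # m/s^2 bias (a rather good MEMS part)
for t in (60, 600, 3600):      # 1 minute, 10 minutes, 1 hour
    print(t, 0.5 * b * t**2)   # ~18 m, ~1800 m, ~64800 m of drift
```

Tens of kilometers of error after an hour, from a bias of one centimeter per second squared. That quadratic growth is why pure dead reckoning needs periodic absolute fixes.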
Almost always there are good reasons why a seemingly obvious solution isn't much used. Sometimes it can be hard to understand why without a lot of in-depth knowledge, as it can depend on somewhat non-obvious aspects, typically in the practical realization, like in this case. With perfect sensors, dead reckoning would work like a charm.
However I find that thinking about the problem, coming up with these what-if solutions and then figuring out why they don't work, can be quite fruitful. Often leading to new knowledge that might actually be relevant down the line, and if not it's a great exercise in problem solving.
You can only measure acceleration using an accelerometer. A gyroscope is technically still useful here for removing the gravity vector from the data the accelerometer is reporting.
> You can only measure acceleration using an accelerometer. A gyroscope is technically still useful here for removing the gravity vector from the data the accelerometer is reporting.
You can combine accelerometer and gyroscope data to give you a (position, orientation) signal that is more precise than either, using a Kalman filter. (You can add other sensors in like magnetometer too)
Unfortunately, even with this technique there is still drift that accumulates over time without bound (the error gets so large you become completely lost). If you can navigate from known landmarks, however, you can eliminate this drift. Such landmarks include the stars (if you can see the night sky), buildings and topographic features like in this post, or GPS.
Your position would drift insanely fast. In a drone, gyro readings and velocity are used in the EKF2 filter to calculate your position, but a GPS position is required to prevent long-term drift.
Obviously, in GPS deprived situations, you have to think about alternatives.
This approach is used very commonly - e.g. every airliner, submarine and military plane will have such a navigation system (augmented with GPS these days obviously).
The issue is that the system naturally drifts, so over time it accumulates error which has to be mitigated somehow (e.g. in military equipment by finding a known landmark and fixing on it).
Yes, this is how navigation was done "in the past", without GPS, using inertial navigation systems. And pilots would recalibrate their position along the way or before arriving "on target" using known landmarks. The fancier systems could recalibrate using known star positions.
The context here is the design of a navigation system that works without GPS.
Once you have a functioning GPS, the INS is not needed at that point.
The INS can run in standby mode and be continuously calibrated using the GPS positions. If GPS is lost, the INS is used as a backup. But at that point, you no longer have the GPS velocity. So the normal thing to do is to integrate the acceleration data from your accelerometer, which gives you the velocity (+noise, +drift).
The story is that these young engineers built a $500 drone that consumes this kind of data to do mapping. In 24 hours, for a hackathon, no less.
If the US government didn't already have this kind of tech, they would spend millions just for the same prototype they built. And probably tens or hundreds of millions for a final product.
It's easy to shallowly dismiss work with a metaphor for how it works ("LLM training is just a marble rolling down a valley"). But hacking together a working proof of concept warms the heart.
There are so many problems with this article. If an area is so "remote" it doesn't have GPS coverage, the Google Maps imagery quality is going to be garbage as well. It claims to work day or night, but how does it work at night? Not only will detail from the camera be reduced, but the available satellite imagery will be greatly reduced in detail as well. And if the team has only conducted a simulated flight, isn't the headline just plain wrong?
Maybe I missed it in the article but does the processing take place on the drone or at a control station?
>if the area is so "remote" it doesn't have GPS coverage
Or if GPS signal is being jammed.
>imagery quality is going to be garbage
Presumably this relies on fairly limited compute resources, so it probably downscales the image from its camera to something like 256x256.
Also, for machine vision stuff, too detailed an image can produce a lot of noise, so you'd need to pass it through some kind of low-pass filter anyway.
GPS is calculated from signals from a range of satellites far above us in orbit. It's not like cellular towers with coverage areas: there are no truly remote areas without GPS. Some areas and times of day will have fewer satellites overhead, so calculating a position will be harder. There are more and more of these satellites (from the USA, Russia, the EU, China, etc.), and modern chips will try to calculate a position from the signals of many of them rather than just one "species".
But yes, remote areas will have lower imagery quality, because they are remote and uninteresting and don't have roads or people or companies to buy ads on the map. Providers will buy cheaper imagery in remote areas.
In the context - a hackathon sponsored by the US Department of Defense - it has great utility. GPS (and any radio-based location system) is quite vulnerable to denial by enemy jamming. This technique would be extremely relevant in a war between peers, and probably provides a much lower-cost option than high-precision inertial navigation systems.
Tangentially, this isn't really new: a ground-pointed radar system combined with a good terrain dataset can also be used.
Now, had they made it happen _on the drone_ that would be more interesting. It looks like it was a simulated flight, especially as there is no rolling shutter wobble. Moreover, if you want real time, you need a monster GPU, unless they've done something clever.
I’m probably being a little over-pedantic, but I think a fail-safe refers to a feature that puts a device in a safe state when it fails. For example, a good fail-safe for a drone could be one that navigates it toward the ground and into a somewhat out-of-the-way location (away from people or roads where it could be a hazard).
This seems more like a potential device recovery feature.
If anything, it is a fail anti-safe… moving around with degraded navigation is always worse, right? Might miss the fact that you’ve entered restricted airspace…
For example: pre-load satellite images and restrict navigation to the radius covered by said images. In case of GPS failure, fail safe to image navigation for getting back "home".
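Sketched in Python (the names, radius, and actions are all made up for illustration): stay within the pre-loaded coverage, fall back to image navigation toward home when GPS drops out inside it, and put the drone down when navigation is degraded outside it.

```python
import math

# Hypothetical fail-safe policy: imagery is pre-loaded for a radius
# around the home point; outside that radius, degraded navigation
# means the safest state is on the ground.
COVERAGE_RADIUS_M = 2000.0
HOME = (0.0, 0.0)

def failsafe_action(position, gps_ok):
    """Pick a flight mode given the last known position and GPS health."""
    dist_home = math.dist(position, HOME)
    if gps_ok:
        return "navigate_gps"
    if dist_home <= COVERAGE_RADIUS_M:
        return "return_home_via_image_nav"   # within pre-loaded imagery
    return "land_now"                        # degraded nav: fail toward the ground

print(failsafe_action((500.0, 300.0), gps_ok=False))   # return_home_via_image_nav
print(failsafe_action((3000.0, 0.0), gps_ok=False))    # land_now
```

This also addresses the "fail anti-safe" objection above: image navigation is only trusted inside the region it was pre-loaded for.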
The sponsor is the military, so the use case is probably a war.
> The problem set given to the hackathon teams from the U.S. Department of Defense and the National Security Innovation Network (NSIN) focused on counter-UAS, AI/ML, and RF/Electronic Warfare.
All sorts of landscape-changing events happen in a war. Buildings burn or are bombed and leveled, dams fall and terrain is flooded, etc. The military won't rely on the satellite images in Google Maps. They'll load their own fresh images; if those are not fresh enough, they'll wait until they have them.
In reports about the war in Ukraine, it's common to read that FPV drones aren't flying because of adverse atmospheric conditions: too much wind, fog, or rain. Or too many enemy electronic warfare systems, which is one of the points of this exercise. The drone will be shielded.
It clearly could be a problem, but you are really looking at feature extraction / terrain analysis and best-guessing with what is available (little different from a human doing the same job). The main thing here is that storage is so cheap and light that these systems become easily possible. Terrain, features + building heights... filter if you need on building height / terrain... work with what is left.
How do you even print an object that large in 24 hours, never mind actually testing/iterating on the design? It seems a little disingenuous to say those aspects were done in 24 hours, unless I'm missing something.
It’s multiple parts, so multiple printers. I’ve printed some plane parts from lightweight PLA, and this seems pretty reasonable to me, especially if you imagine prior experience and multiple people. It would get tricky if you had to iterate on the design, though.
Unless your drone somehow makes it to Antarctica, that would be hard to imagine happening; and even if it did, it could just continue on in one direction until it sees a feature again. Or fly higher.
I’ve driven across the Great Lakes when frozen but even from the ground there are landmarks you can see.
Well, I've flown my (crappy) drone over the Greenland ice sheet. There are features in the ice sheet, but they change regularly (once you're at an altitude high enough that there are no melt lakes). Nunataks and such aren't all that common away from the coasts. Antarctica would be worse on the plateau away from the Transantarctic Mountains.
(There are other practical problems like how poorly drones do in the climate).
The coast guard actually samples the ice and marks lanes. There are islands that people live on up there and when it freezes enough you can drive back to the mainland.
There were lots of people who just did their own thing, but yes, that would otherwise be a risk without testing, and I'm sure I'd be too cautious to be one of them.
Now, though, I don't think it gets cold enough where I was (Lake Erie near the Ohio/Michigan border) for it to freeze enough in the winter anymore. It's been a solid 20 years since I knew people who drove on it, so I'm not sure it's still a thing.
NYT actually wrote an article about how the lack of freezing is killing a brisk (pun intended) winter business there and mentions it. I used to have family who lived on the next island north.
At research stations you are likely close to a skiway and would be unlikely to get permission if you asked, I think. I'm not sure what official NSF policy is, since I didn't have a drone the times I've been in Antarctica, but I was able to get permission to fly my drone near Summit Station, Greenland (by asking the camp manager).
In a sense, they are right. There are several coordinate systems, but the one everyone uses (WGS84) was developed around the time of GPS and intended to be used for that purpose.
So GPS coordinates isn’t really a bad way to describe them.
This is not what anybody was remotely talking about, but I am feeling pedantic and thought it a fun fact: GPS's native coordinates are actually a 3D Cartesian grid. However, polar coordinates are so much more convenient for us living on the surface of a sphere that I don't think anything outside the deep internals of a GPS receiver uses the raw ECEF coordinate system.
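For the curious, here is a self-contained sketch of that conversion using the WGS84 constants; the iterative inverse is one common textbook approach, roughly what a receiver does internally before showing you latitude/longitude.

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0                 # semi-major axis (m)
F = 1 / 298.257223563         # flattening
E2 = F * (2 - F)              # first eccentricity squared

def geodetic_to_ecef(lat, lon, alt):
    """Forward conversion, used here to check the inverse (angles in radians)."""
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
    x = (n + alt) * math.cos(lat) * math.cos(lon)
    y = (n + alt) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + alt) * math.sin(lat)
    return x, y, z

def ecef_to_geodetic(x, y, z, iters=10):
    """Iterative ECEF -> (lat, lon, alt): longitude is closed-form,
    latitude and altitude converge in a few fixed-point iterations."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1 - E2))            # initial guess
    for _ in range(iters):
        n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
        alt = p / math.cos(lat) - n
        lat = math.atan2(z, p * (1 - E2 * n / (n + alt)))
    return lat, lon, alt

# Round trip: ~48.8N, 2.35E at 100 m altitude.
lat0, lon0, alt0 = math.radians(48.8), math.radians(2.35), 100.0
lat, lon, alt = ecef_to_geodetic(*geodetic_to_ecef(lat0, lon0, alt0))
print(round(math.degrees(lat), 6), round(math.degrees(lon), 6), round(alt, 3))
```

The round trip recovers the input to well below a millimeter, which is why nobody outside the receiver ever needs to see the raw ECEF triple.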
Unless I'm missing something extraordinary this is:
- lacking any details of technical implementation
- full of ads
- substantiated only by a single thread on X, which is what this article should link to (much as I dislike X), not a page full of ads.
$500? 1 day? Cool. I'd love to know more; unfortunately, there is no more to know, because this page is just clickbait, with arbitrary statements like:
> which uses cameras to position itself, using imagery that doesn’t rely on light to work means this drone can fly anywhere in the world it has imagery for at any time of the day or night.
> The team built this impressive drone during the El Segundo Defense Tech Hackathon hosted by 8VC, Entrepreneur First, and Apollo Defense in collaboration with the. <--- the what?
These are obviously wrong and probably AI generated.
Unfortunately, X threads aren't very useful for people who don't have an account on X. Linking to the original source is much better. If the original source is only an X thread, the authors should reconsider their life choices. It's never too late.