A* is less useful when you're not omniscient, that is, when testing whether a cell is blocked has a sensing cost. I ran into this in a game application. To find out if a cell is obstructed, I have to do a ray cast at a few points in the cell, which uses resources. A* requires sensing a large number of cells whose data never ends up being useful, and if you have a big, mostly open space with some obstacles, like the real world, it does far too much sensing.
So I ended up with a variant of Pledge's approach to wall-following. Head toward the goal until an obstacle is detected. Then start wall-following, but simultaneously in both the left and right directions. When one of the wall-followers can head towards the goal again, do that, and kill off the other wall-follower. So you alternate between heading towards the goal in open space, cheaply, and wall-following.
Searching both left and right simultaneously avoids taking the long way round some obstacles.
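Roughly, a sketch of that loop on a 4-connected grid might look like the following. sense_blocked() stands in for the expensive ray-cast probe; the step budget, turn-preference order, and other details are illustrative, not the exact code I use.

    def sign(v):
        return (v > 0) - (v < 0)

    def greedy_step(pos, goal):
        # One axis-aligned step straight toward the goal (largest gap first).
        dx, dy = goal[0] - pos[0], goal[1] - pos[1]
        if abs(dx) >= abs(dy) and dx != 0:
            return (pos[0] + sign(dx), pos[1])
        return (pos[0], pos[1] + sign(dy))

    def wall_follow_step(pos, heading, turn, sense_blocked):
        # One wall-following step: prefer turning one way (turn=+1 or -1),
        # else straight ahead, else the other way, else back up.
        dirs = [(1, 0), (0, 1), (-1, 0), (0, -1)]
        i = dirs.index(heading)
        for offset in (turn, 0, -turn, 2):
            d = dirs[(i + offset) % 4]
            nxt = (pos[0] + d[0], pos[1] + d[1])
            if not sense_blocked(nxt):
                return nxt, d
        return pos, heading  # boxed in

    def navigate(start, goal, sense_blocked, max_steps=10_000):
        pos, steps = start, 0
        while pos != goal and steps < max_steps:
            steps += 1
            ahead = greedy_step(pos, goal)
            if not sense_blocked(ahead):
                pos = ahead  # open space: cheap, direct
                continue
            # Obstacle hit: run a left-turning and a right-turning follower in
            # lockstep, and commit to whichever can resume heading toward the
            # goal first.
            followers = [
                {"pos": pos, "heading": (0, 1), "turn": +1},
                {"pos": pos, "heading": (0, 1), "turn": -1},
            ]
            resumed = False
            while not resumed and steps < max_steps:
                steps += 1
                for f in followers:
                    f["pos"], f["heading"] = wall_follow_step(
                        f["pos"], f["heading"], f["turn"], sense_blocked)
                    if not sense_blocked(greedy_step(f["pos"], goal)):
                        pos, resumed = f["pos"], True  # this follower wins
                        break
        return pos == goal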
I'm a bit confused by your comment. When you say that "testing if a cell is blocked has a sensing cost", do you mean the obstacle/wall is not in your graph at the point of search? A properly calculated navmesh by definition has the obstacle/wall 'cut out' of the mesh (or otherwise represented in a modifier area). Perhaps I've misunderstood you, but it sounds like you're trying to build, at least partially, the navmesh as you search?
If you're rather talking about temporary obstacles like other nav agents that need to be avoided, there are a number of approaches to agent avoidance that work nicely on a subset of a navgraph.
> sounds like you're trying to build, at least partially, the navmesh as you search?
Yes. I'm coding non-player characters, using an API which lets them sense their surroundings by ray-casting but does not give them direct access to the system's world model. They're limited in what they can sense.
That sounds like an issue (and a big issue at that) with the API your engine is providing to you, rather than an issue inherent in A*! Building/rebuilding a navmesh is the expensive bit, cell look-ups ought to be essentially 'free', performance-wise. In an engine like Unity a physics raycast is a super expensive call to make, so if that's also the case for your situation then that's a real shame, I hope you can work around that.
If I'm doing regular "where are other objects relative to an object" queries I much prefer a spatially ordered data structure like a quadtree, but doing a quick hash of the position and sticking it in a key-based store works fine too. Again, that's one of those data structures that is expensive to build and cheap to query, so only building it once every x frames is probably a good idea. But again, not sure what your API is giving to you and how low-level you get to work.
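For the "hash the position" option, something like a uniform spatial hash is only a few lines, roughly along these lines (names and numbers are illustrative):

    from collections import defaultdict

    class SpatialHash:
        def __init__(self, cell_size):
            self.cell_size = cell_size
            self.buckets = defaultdict(list)  # (ix, iy) -> [(id, x, y), ...]

        def _key(self, x, y):
            return (int(x // self.cell_size), int(y // self.cell_size))

        def insert(self, obj_id, x, y):
            self.buckets[self._key(x, y)].append((obj_id, x, y))

        def query_near(self, x, y, radius):
            # Return ids within `radius` of (x, y) by scanning nearby buckets.
            r2 = radius * radius
            kx, ky = self._key(x, y)
            reach = int(radius // self.cell_size) + 1
            hits = []
            for ix in range(kx - reach, kx + reach + 1):
                for iy in range(ky - reach, ky + reach + 1):
                    for obj_id, ox, oy in self.buckets.get((ix, iy), []):
                        if (ox - x) ** 2 + (oy - y) ** 2 <= r2:
                            hits.append(obj_id)
            return hits

    # Rebuild every few frames (cheap to query, so the build cost amortizes):
    # grid = SpatialHash(cell_size=4.0)
    # for agent in agents: grid.insert(agent.id, agent.x, agent.y)
    # nearby = grid.query_near(player.x, player.y, radius=10.0)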
I was genuinely interested, and, wow. It's been 15 years since I've tried Second Life but I'm not terribly surprised the scripting/modding looks like this, and all your comments about the exposed API make sense now.
g(x) represents the costs that have been incurred thus far, since the starting point. How you wish to quantify and evaluate that cost is up to you as the implementer. For spatial navigation purposes, most people opt for “cost = Euclidean distance traversed”, but if Euclidean distance is not the only thing you’re trying to minimize, then your cost function must take other factors into account.
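For example (made-up terrain and danger factors, just to illustrate that g can accumulate more than distance):

    import math

    def edge_cost(a, b, terrain_multiplier, danger):
        # Cost of stepping from node a to node b.
        # a, b               -- (x, y) tuples
        # terrain_multiplier -- e.g. 1.0 for road, 3.0 for swamp (assumed lookup)
        # danger             -- extra penalty for cells near enemies (assumed)
        euclidean = math.dist(a, b)
        return euclidean * terrain_multiplier + danger

    # In the A* loop, g accumulates these edge costs:
    #   new_g = g[current] + edge_cost(current, neighbor,
    #                                  terrain[neighbor], danger_map[neighbor])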
But look, the goal is "find the path that costs the least to traverse". The stated problem with A* is "running A* is too expensive". Why mix the outside-the-universe cost of running A* into the within-universe cost of traversing the path?
If you've already paid the cost of scanning a path, the cost of using the knowledge you gained from that is still zero.
This reminds me of what the Death Stranding team presented at GDC regarding the pathfinding. They had LOTS of new issues for a game due to the unique nature of Death Stranding (obstacles, tripping, balancing) and they do an excellent job at outlining their approach.
I've been working on a video game and actually followed the article to implement A*.
If you have an inaccessible node, A* will indeed scan everything. But to get around this I only had to add a limit to the number of frontier iterations, which was just a conditional.
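Something along these lines, using the article-style priority-queue A* as the base; the cap value and the helper names (neighbors, cost, heuristic) are placeholders:

    import heapq

    def a_star_capped(start, goal, neighbors, cost, heuristic, max_expansions=10_000):
        frontier = [(0, start)]
        came_from = {start: None}
        g = {start: 0}
        expansions = 0
        while frontier:
            expansions += 1
            if expansions > max_expansions:
                return None  # goal unreachable (or too far): bail out early
            _, current = heapq.heappop(frontier)
            if current == goal:
                break
            for nxt in neighbors(current):
                new_g = g[current] + cost(current, nxt)
                if nxt not in g or new_g < g[nxt]:
                    g[nxt] = new_g
                    heapq.heappush(frontier, (new_g + heuristic(nxt, goal), nxt))
                    came_from[nxt] = current
        else:
            return None  # frontier exhausted without reaching the goal
        # Reconstruct the path by walking back from the goal.
        path, node = [], goal
        while node is not None:
            path.append(node)
            node = came_from[node]
        return list(reversed(path))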
I used to be obsessed with the programming game Screeps and read most of these blog posts when they were still published on the author's Stanford pages. There's a lot of great stuff in there.
Screeps is fascinating but I never got motivated to play it. :(
I use the Stanford pages [1] to link to interesting papers and I use Red Blob Games to explore interactive ways of presenting topics. The most recent update to the Stanford pages is from 3 weeks ago, about any-angle pathfinding [2]. But most of what I do these days is on the Red Blob Games site. I probably would've kept using the Stanford pages but they have a 100MB quota limit and I was running out of space…
I have bookmarked many Red Blob Games posts like this one, both for their excellent content and as examples of how to write truly great tutorials: well-organized content, good CSS without over-styling, and advanced JavaScript used only in exactly the right places (interactive demos, toggles to customize the content to your use case, e.g. hex vs. square), never to hijack my scroll bar or add unnecessary flashiness. This site is my go-to for inspiration on how to write a fantastic tutorial.
A few years ago, I wondered whether it was possible to extend A* to pathfinding with momentum. It would require a consistent and admissible heuristic for Newtonian kinematics. It was a little tricky to find but it turns out that it does exist!
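Concretely, the search state carries velocity as well as position. The sketch below uses a simple admissible stand-in heuristic (straight-line distance over an assumed top speed, a lower bound on remaining time when each step costs one tick), not the tighter kinematic heuristic referred to above; the model and names are illustrative only.

    import math

    MAX_SPEED = 5.0
    ACCELS = [(ax, ay) for ax in (-1, 0, 1) for ay in (-1, 0, 1)]  # per-step thrust choices

    def neighbors(state):
        # Expand (x, y, vx, vy) by one time step under each thrust choice.
        x, y, vx, vy = state
        out = []
        for ax, ay in ACCELS:
            nvx, nvy = vx + ax, vy + ay
            if math.hypot(nvx, nvy) <= MAX_SPEED:
                out.append((x + nvx, y + nvy, nvx, nvy))
        return out

    def heuristic(state, goal):
        # Remaining straight-line distance divided by top speed never
        # overestimates the remaining number of unit time steps (admissible).
        x, y, _, _ = state
        return math.hypot(goal[0] - x, goal[1] - y) / MAX_SPEED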
I've come across this link before, and it's actually really useful. I used it to help me implement A* myself for a project I was working on, which worked decently for my rather simple use-case.
Amit, you're an amazing man. I came across this while doing Advent of Code this year. You're the only reason I no longer fear DSA as a self-taught programmer. If you're ever in Bangalore I owe you a drink.
I remember reading these a long time ago! Thanks Amit, really helped me break into the games industry. (I’ve since left but still enjoy game programming)
Related threads:
Introduction to the a* Algorithm - https://news.ycombinator.com/item?id=24146045 - August 2020 (1 comment)
Introduction to A* (2014) - https://news.ycombinator.com/item?id=18642462 - December 2018 (14 comments)
Introduction to A* - https://news.ycombinator.com/item?id=16190604 - January 2018 (0 comments)
Introduction to A* algorithm - https://news.ycombinator.com/item?id=10724098 - December 2015 (1 comment)
Making of “Introduction to A*” - https://news.ycombinator.com/item?id=8445732 - October 2014 (12 comments)
Introduction to A* - https://news.ycombinator.com/item?id=8059237 - July 2014 (28 comments)