One of the heuristics I implemented in my A* was to link information from the AI back to the routing grid. For example, A* will always pick the shortest route, but that isn't necessarily the best route. Imagine you have the AI pinned down in a building and the route finder picks a route out of the building; it's likely to pick the shortest route (out the front door). So all the player has to do is put the crosshair over the door and keep shooting.
Now sure, you could write some code to 'fix' this issue. Or you could let the problem fix itself: whenever an AI pawn was killed at a certain point, I added a penalty to the surrounding nodes. That made the route more and more expensive over time, and after a short while the AI decided to use the back door and walk around the building.
The important part of this is player perception. To the player it looks like the AI is flanking them, and they're like 'woooo cooool', which is exactly the effect you're after.
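A rough sketch of how that kind of death-penalty weighting could be hooked into the movement cost (the grid layout, field names and numbers here are illustrative, not from any particular engine):

    // Sketch only: a grid of nodes carrying an extra, mutable penalty cost.
    interface GridNode {
      x: number;
      y: number;
      walkable: boolean;
      deathPenalty: number; // accumulated cost from AI deaths nearby
    }

    // Called whenever an AI pawn dies at (x, y): bump the cost of the nearby
    // nodes so future A* searches gradually prefer other routes (the back door).
    function recordDeath(grid: GridNode[][], x: number, y: number, radius = 2, penalty = 10): void {
      for (let dx = -radius; dx <= radius; dx++) {
        for (let dy = -radius; dy <= radius; dy++) {
          const row = grid[y + dy];
          const node = row && row[x + dx];
          if (node && node.walkable) {
            node.deathPenalty += penalty;
          }
        }
      }
    }

    // The per-step cost used by A*: base movement cost plus the learned penalty.
    // The same idea covers things like known enemy firing zones: just add more terms.
    function moveCost(from: GridNode, to: GridNode): number {
      const base = from.x !== to.x && from.y !== to.y ? Math.SQRT2 : 1;
      return base + to.deathPenalty;
    }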
That sounds like a weighted graph, with different factors contributing to the weighting. Kills, terrain type, etc. can all play into that to account for each unit type's strengths and weaknesses, plus historical factors such as number of kills. If you want to get clever, you could implement kill weighting while taking into account whether or not information about those kills has been communicated through direct observation, message relaying, sound, and so on. If you get really clever, you could end up with enemies who remove evidence of their kills to maintain the element of surprise, who hunt down last survivors to prevent the information from being communicated, and other emergent behavior.
- increase the cost of traversing nodes that are within known enemy firing zones
- in some circumstances if the destination is unreachable it might be desirable to still compute a "best effort" path to the node that is the minimal heuristic distance from the unreachable destination
This is actually very important, especially in RTS games! If you have a unit selected, and he's standing near the edge of a cliff, and you right-click off the edge (intersecting lower-elevation terrain), then the naive A* algorithm would make the unit run away from your mouse cursor, because it would search the whole map for a valid way to get there. However, the player very likely wanted to get as close as possible to the edge, e.g. to attack an enemy he spotted.
(In other words, when pathing to different elevations, the player almost always wants the unit to "run up as close as possible" to the edge, not try to find a valid path all the way around the map.)
It's actually a very tough/interesting problem, because it's probably impossible to "always do what the user intends in every situation" by merely looking at his mouseclicks.
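A hedged sketch of that "best effort" behaviour: while expanding, remember the node with the smallest heuristic value seen so far, and if the open set empties without reaching the goal, return the path to that node so the unit runs up as close as possible. The node shape and helper names are mine, not from the posted library.

    // Track the expanded node closest to the goal (by heuristic h); fall back
    // to it when the destination turns out to be unreachable.
    interface SearchNode {
      g: number;                 // cost from the start
      h: number;                 // heuristic estimate to the goal
      parent: SearchNode | null; // back-pointer for path reconstruction
    }

    function reconstructPath(end: SearchNode): SearchNode[] {
      const path: SearchNode[] = [];
      for (let n: SearchNode | null = end; n !== null; n = n.parent) {
        path.unshift(n);
      }
      return path;
    }

    // Call this each time a node is popped off the open set.
    function trackClosest(current: SearchNode, closest: SearchNode): SearchNode {
      // Prefer the smaller h; break ties on g so the cheaper approach wins.
      if (current.h < closest.h || (current.h === closest.h && current.g < closest.g)) {
        return current;
      }
      return closest;
    }

    // When the open set is exhausted without finding the goal:
    function bestEffortPath(closest: SearchNode): SearchNode[] {
      return reconstructPath(closest);
    }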
As I am currently trying to implement the A* pathfinding algorithm in C++ this is both timely & very cool.
Most interesting is the selectable distance heuristic function (I'd never thought about doing that in my own limited experience), which could mix things up a bit with a pathfinding game AI or similar.
Kudos for sharing this little gem :)
[edit]
I'm implementing A* on my own for educational purposes, but here's a good reference implementation in C++ on Google Code:
If you are interested in a production quality open source pathfinding library take a look at Recast/Detour. It's used in a number of AAA game titles and is probably one of the most robust implementations out there:
http://code.google.com/p/recastnavigation/
Full disclosure: the author works with me on Tinkercad, a web-based solid CAD tool.
Folks here are saying how important it is to choose the right heuristic. That's right, but there's another catch:
For something more realistic (i.e. a graph containing more edges), A* needs a priority queue for the open set so it can extract the node with minimum f() in at most logarithmic time. Note that such a structure also needs to be able to delete (or update) a previously stored entry by node, not by f(), which is not something a priority queue offers by default - another area to fine-tune in order to avoid a time explosion.
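To make that concrete, here is a minimal sketch of an indexed binary min-heap keyed on f(), with an update operation addressed by node rather than by priority value; the class and method names are illustrative, not taken from any particular library.

    // Indexed binary min-heap for the A* open set: extract-min by f() in
    // O(log n), plus decreaseKey looked up by node (not by f value).
    class OpenSet<T> {
      private heap: T[] = [];
      private index = new Map<T, number>(); // node -> position in the heap array

      constructor(private f: (node: T) => number) {}

      get size(): number { return this.heap.length; }

      contains(node: T): boolean { return this.index.has(node); }

      push(node: T): void {
        this.heap.push(node);
        this.index.set(node, this.heap.length - 1);
        this.bubbleUp(this.heap.length - 1);
      }

      // Call after lowering a node's f(): restore the heap property from its slot.
      decreaseKey(node: T): void {
        const i = this.index.get(node);
        if (i !== undefined) this.bubbleUp(i);
      }

      popMin(): T | undefined {
        if (this.heap.length === 0) return undefined;
        const min = this.heap[0];
        const last = this.heap.pop()!;
        this.index.delete(min);
        if (this.heap.length > 0) {
          this.heap[0] = last;
          this.index.set(last, 0);
          this.sinkDown(0);
        }
        return min;
      }

      private swap(i: number, j: number): void {
        [this.heap[i], this.heap[j]] = [this.heap[j], this.heap[i]];
        this.index.set(this.heap[i], i);
        this.index.set(this.heap[j], j);
      }

      private bubbleUp(i: number): void {
        while (i > 0) {
          const parent = (i - 1) >> 1;
          if (this.f(this.heap[i]) >= this.f(this.heap[parent])) break;
          this.swap(i, parent);
          i = parent;
        }
      }

      private sinkDown(i: number): void {
        for (;;) {
          const left = 2 * i + 1, right = 2 * i + 2;
          let smallest = i;
          if (left < this.heap.length && this.f(this.heap[left]) < this.f(this.heap[smallest])) smallest = left;
          if (right < this.heap.length && this.f(this.heap[right]) < this.f(this.heap[smallest])) smallest = right;
          if (smallest === i) break;
          this.swap(i, smallest);
          i = smallest;
        }
      }
    }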
Hi, Matthew Trost (author) here. I'm actually a bit mortified that this found its way onto Hacker News, as now my admittedly bush-league JavaScript code is getting the brickbats. :-) In truth, I made it mainly as a practice exercise: code, UI, documentation .... So, please check out the links offered by others here, as many talented folks out there deserve attention for their far-more-sophisticated work!
In any case, I love this discussion. Programming is fun.
I have added in a few optimizations: in my first run at it (https://github.com/bgrins/javascript-astar/blob/master/astar...) I used a standard array for the open and closed lists, and I found it slowed down quite a bit as the graph size got larger. I switched the implementation to a binary heap, which made it much faster for larger graphs. Try the original demo then the new one at 100x100 to see what I mean :)
There are about a zillion implementations of vanilla A* on the internet. What's particularly interesting about this one? Do you have some cool new heuristic function or some nice insight into how the search works? The as-is version is pretty crappy. I mean, I can't even see the nodes expanded!
As far as I can tell there is nothing particularly interesting about this implementation. It's cool because it lets you play with it and because the code is pretty short which makes it easy for anyone unfamiliar with the algorithm to grok how it works. It also has some nice documentation: http://www.matthewtrost.org/projects/astar/documentation.htm...
Actually, I found the documentation reasonably poor. It does a good job of explaining the API but does little to provide a would-be user with any insights into why the search works, how it works or how efficient it is. It doesn't even discuss why (or when) I might choose one of the three given heuristic functions over the others.
i.e. despite its length, it's not actually a very useful document.
While I agree that what you listed would be useful to include I think it's important to note that it is documentation of his implementation (what the various parameters are for) and not of how A* works. To that end he did a good job. I especially like the little ideas he throws in under "Extending astar.js." He links to the Wikipedia entry for those who want to know more about how A* works.
Suppose the C++ STL documentation, such as it is, linked you to CLRS. Would you regard it as adequate or would you demand more? I don't expect a rigorous academic treatment here but at least something to tell me (a) how efficient the implementation is and (b) why (or when) I might want to use X or Y provided feature over Z. For example: why is the Euclidean heuristic "slow"? Is it a shitty implementation? Is it because it's underestimating the goal distance too much? Another gripe: the author doesn't even tell me what kind of open and closed list implementations are in use!
I don't mean to be a hater, but this fails to contribute something novel, it fails as a learning resource that teaches something interesting, and it fails as a well-documented programming hack.
Even if it's not the best implementation, it could be considered as scaffolding upon which to improve.
While you clearly have enough experience with A* to tell the difference between a good and bad implementation, I've not got the same background (I've been reading about it, looking at flash implementations and trying to figure out how the heck I was going to do it in JS+Canvas).
Maybe there's a short list of glaring inefficiencies or places where this implementation is behind the curve. What's it missing?
I'm sure that for the OP this is a reasonably significant achievement, to me it's voodoo... and to John Carmack it's something else.
I doubt Carmack would be so derisive of someone's professional development. Would you tell an 8 year old that their finger paintings were pretty crappy too?
Criticism can be valid and encouraging if delivered in the right way.
Perhaps we have different expectations of what we think should appear on Hacker News.
I come here to find out about cool new stuff that hackers elsewhere in the world are working on. To me, an individual learning a fundamental AI algorithm isn't all that interesting. Particularly as the OP in this case has nothing to add to the discussion; he's just crowing to the world: "Look! I implemented someone else's idea!"
That's retarded. 90% of the articles on the homepage at all times are garbage tech gossip. I don't see you insulting those authors. You're either a troll or just a jerk.
Wtf? This is a news site. How is somebody learning something fundamental news? Would you like to see a front-page story every time some kid completes an algorithms assignment? I don't. Not because the learning is unimportant, simply because it's most interesting to the person doing the learning.
All I'm saying is this: if you've learned something cool, and you want to tell the world about it, you need to present it in a way such that your learning contributes something to the learning of others. For example: explaining a complicated idea in really simple terms that anyone can understand (such as this cute example of the Halting Problem proof: http://www.lel.ed.ac.uk/~gpullum/loopsnoop.html).
Another way to make your learning interesting to other people is to present an implementation that provides insight into the subject. The author tried to do this but I argue his execution is poor. I learned nothing from his app and nothing from his documentation. No implementation insights, no analysis, no beautiful code, nothing. Nada. Zip. Zilch. It's just a web-based A* implementation. Of which there are zillions.
Excuse me? The study of better heuristics to guide search is a very active area of AI research. See, for example, Sturtevant et al's 2009 work on True Distance Heuristics or their 2011 work on compressed heuristic functions presented at AAAI last week.
No, it's more general. The central idea is to identify "interesting" nodes in explicit graphs which can be used to improve heuristic accuracy. You might even be able to do this for implicit graphs though that extension is not straightforward. I suspect you'd need some kind of pattern database approach.
Harabor and Grastien had a nice AAAI 2011 paper on speeding up heuristic search -- which is specific to 2D grids.
Exactly, my point was that you can't improve that much on heuristics for a 2D grid. The heuristics are already admissible and simple, pretty much all you can do is speed it up.
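For reference, the standard admissible grid heuristics really are about this simple (a sketch using the usual formulas, not anyone's library code; dx and dy are absolute coordinate differences to the goal):

    function manhattan(dx: number, dy: number): number {
      return dx + dy;                        // 4-connected grids
    }

    function octile(dx: number, dy: number): number {
      // 8-connected grids where a diagonal step costs sqrt(2)
      return Math.max(dx, dy) + (Math.SQRT2 - 1) * Math.min(dx, dy);
    }

    function euclidean(dx: number, dy: number): number {
      return Math.hypot(dx, dy);             // straight-line distance
    }

On an 8-connected grid, octile dominates euclidean (euclidean underestimates more), which is one common reason a Euclidean heuristic ends up expanding more nodes and feeling slower.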
Well, just to put in a hopeful word here: for a long time mathematicians figured that the most efficient way to multiply square matrices was O(n^3). Yet today it's significantly less than that, even in practice.
Huh? The True Distance Heuristic stuff to which I refer can speed up search quite dramatically. The trick is to find a nice tradeoff between improved heuristic accuracy and memory overhead.
Basically: the less accurate AStar's heuristic function is, whatever the domain, the more nodes it has to expand to reach the goal. That's because with a poor heuristic the fringe is expanded in all directions rather than strictly in the direction of the goal. Manhattan and Octile are not great heuristics because they assume the path to the goal is always unblocked. So you end up exploring dead-ends and other areas that look promising but which the optimal path does not cross.
You can see the difference in performance by comparing different heuristic functions; for example the special case of AStar where h(n) = 0 (i.e. Dijkstra's algorithm) vs the Manhattan distance heuristic. The latter is more informed and the search proceeds faster.
The paper to which I refer builds a database of costs between every node and certain landmark points. Given such information you can now better estimate the distance between two points by computing a distance differential.
Bottom line: their search is several times faster than vanilla AStar using a plain-jane Octile or Manhattan heuristic. More generally, the more landmarks you use, the better the heuristic. However, with better performance comes the need to store more costs which can introduce a significant memory overhead.
The details are in the papers I mentioned if you want to know more.
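In case it helps, here is a rough sketch of the distance-differential idea, in the spirit of the landmark family of heuristics rather than the exact method in those papers (the names are mine):

    // Precompute exact shortest-path distances from a few landmark nodes to
    // every node (offline, e.g. one Dijkstra run per landmark). Then the
    // triangle inequality gives |d(L, goal) - d(L, n)| as an admissible lower
    // bound on d(n, goal); take the max over landmarks and the base heuristic.
    type NodeId = number;

    function landmarkHeuristic(
      landmarkDists: Array<Map<NodeId, number>>,          // one map per landmark
      baseHeuristic: (n: NodeId, goal: NodeId) => number, // e.g. octile distance
      n: NodeId,
      goal: NodeId
    ): number {
      let best = baseHeuristic(n, goal);
      for (const dists of landmarkDists) {
        const dn = dists.get(n);
        const dg = dists.get(goal);
        if (dn !== undefined && dg !== undefined) {
          best = Math.max(best, Math.abs(dg - dn));
        }
      }
      return best;
    }

More landmarks tighten the bound but mean storing more precomputed distances, which is the memory trade-off mentioned above.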
I really like that visualization! It would be awesome to see that same visualization side by side with Dijkstra's algorithm (the same as A*, but with a zero heuristic) to see how much better it can perform.
Thanks - if I get some time I may consider augmenting it - that visualisation was put together as a proof of concept and as my first <canvas> experiment - it needs some more work ^_^.
In the back of my mind, I seem to recall that path-finding A* algorithms like these actually aren't quite A*.
IIRC, this is a constrained version of A* under the assumption that the heuristic h() is not only admissible but also monotonic, whereas A* only requires admissibility. The additional monotonicity constraint guarantees that the g() values of previously visited nodes never change, right? Otherwise if you find a better g() path you may have to do a lot of costly book-keeping. I presume the book-keeping is not part of the algorithm on display here.
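Roughly, the book-keeping in question looks like this (a sketch with illustrative node fields and set shapes, assuming g() is initialised to infinity for unvisited nodes):

    // With an admissible but non-monotonic h(), a node already in the closed
    // set can later be reached with a smaller g() and must be reopened so its
    // descendants get corrected g() values.
    interface Node {
      g: number;
      h: number;
      parent: Node | null;
    }

    function relaxNeighbor(
      current: Node,
      neighbor: Node,
      stepCost: number,
      openSet: { push(n: Node): void; decreaseKey(n: Node): void; contains(n: Node): boolean },
      closedSet: Set<Node>
    ): void {
      const tentativeG = current.g + stepCost;
      if (tentativeG >= neighbor.g) return; // no improvement

      neighbor.g = tentativeG;
      neighbor.parent = current;

      if (closedSet.has(neighbor)) {
        // With a monotonic heuristic this branch never fires; without it, reopen.
        closedSet.delete(neighbor);
        openSet.push(neighbor);
      } else if (openSet.contains(neighbor)) {
        openSet.decreaseKey(neighbor);
      } else {
        openSet.push(neighbor);
      }
    }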
It's so strange, I was just playing with pathfinding algorithms in CoffeeScript this weekend and thought about putting something like this together, but you have beaten me to it :) Really nice work.