Hacker News
How to smooth and spread A* paths for an RTS (construct.net)
210 points by AshleysBrain on Jan 31, 2023 | 105 comments



I just got into the weeds of this for a hobby game I'm working on.

What I've learned is that A* is largely deprecated these days. Modern games mostly use "navigation meshes" for "any angle" pathfinding, to address what is otherwise a very expensive search problem. The idea is that you generate a set of vertices across your terrain and compute the path from there.

One of the current leading experts in this is Daniel Harabor. His papers are brilliant:

http://harabor.net/data/presentations/gdc2019.pdf

https://harabor.net/daniel/index.php/pathfinding/


Maybe someone can clear something up for my understanding here.

Having implemented A*-style algorithms occasionally, I was under the impression that by "navmesh" people mean a planar vector structure that can then be navigated using, for example, A*. As opposed to a grid data structure consisting of cells that can then be navigated using a pathfinding algorithm. I always saw A* as a strategy to find a path in any graph, and I saw navmesh as an example of such a graph.

Now it seems people are defining navmeshes as both a data structure AND pathfinding strategy, and by the same token are likewise seeing A* as both. This seems really confusing to me.

Have I been using the lingo wrong all this time?


No, I think you're right and that I was mistaken. The only navmesh implementations I've seen do not use A*, with A* only being used in grid structures. But now I see that was a coincidence.


I've worked on several AA and AAA games which use nav meshes and they've all used A* for search.

This is also how many game engines, including Unity, implement their NavMesh queries.


A* can also be used for non-grid structures. It can actually be used for any graph traversal, including e.g. Google Maps Navigation type use cases, and is arguably even better suited to those problems than grid movement, since the lowest-cost path through a grid is often a very unnatural way to move through an open space, especially if you're using Manhattan distances.


A* is as valid on a navmesh as it is in a grid-based layout. A* just needs "points" (or each edge of a navmesh polygon) and the edges that connect them, how those points and edges are represented or exist in the world is entirely an implementation detail.


It's funny cause A* (a slight modification of Dijkstra's algorithm) is explicitly an algorithm that operates on graphs. Applying it to a "navmesh" is actually conceptually simpler than thinking of a grid-based game world as a big uniform graph.
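Since the point keeps coming up in this subthread, here is a minimal sketch of A* over an arbitrary graph (adjacency lists plus a heuristic), with nothing grid-specific in it. All names are illustrative, not from any particular engine:

```python
import heapq
import itertools

def a_star(graph, start, goal, h):
    """graph: node -> list of (neighbor, edge_cost); h: admissible heuristic."""
    counter = itertools.count()  # tie-breaker so the heap never compares nodes
    open_set = [(h(start), 0.0, next(counter), start, None)]
    came_from, best_g = {}, {start: 0.0}
    while open_set:
        _, g, _, node, parent = heapq.heappop(open_set)
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry; a cheaper route was found later
        came_from[node] = parent
        if node == goal:  # reconstruct by walking parents back to start
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(open_set, (ng + h(nbr), ng, next(counter), nbr, node))
    return None  # goal unreachable
```

With `h` always returning 0 this degenerates into Dijkstra's algorithm; the "nodes" can be grid cells, navmesh polygon edges, or road intersections, and the code doesn't change.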


In all games I've worked on, nav meshes have been used. They are definitely the go-to solution for NPC movement.

However, I did work on a number of popular open world racing games where a lot of NPCs were travelling across the map, often where no player was nearby. For these, as well as for visualization of travel routes on the map, we used a graph of the road network and A* with some simple modifications such as adding edges from the start and end point to the nearest points on the road network.


Note that a navmesh can be used with A* just fine (and also allows a post-search path-shortening pass, with for example http://digestingduck.blogspot.com/2010/03/simple-stupid-funn...)


You still run A*, it’s just on the nav mesh rather than on a grid.


Interesting, this was at the bottom of the slides:

> It’s not yet clear to what degree new algorithms like Anya and Polyanya can help improve the state-of-the-art in these areas.

Considering how Poly-Anya was sometimes slower than Anya (the non-nav-mesh version of path finding), it seems like the jury is still out as to whether this technique + nav-mesh is useful?

I could be completely wrong, I'm just looking at the results from the papers/slides you posted. It seems that A* still has relevance because it and modifications to it are still the fastest path-finding algorithms?


But if you want to navigate in a 3d space (flying/space games) navmeshes are useless and you have to roll your own spline(ish) 3d space navigation/obstacle avoidance system. (It's kinda weird that an engine as popular and massive as UE5 only has nav meshes.)


Yeah, both UE5 and Unity3D seem to rely heavily on nav meshes for their built in path-finding. In my experience, both are sub-optimal for many, many use cases but are convenient in that they apply to a bunch of "on the ground" topology that can be somewhat easily generated using a quick ortho camera project/ray-cast operation.

Maybe they use it cause it's quick and easy, not because it's the most optimal?


IIRC UE also still only has a built-in option for a static (gravitational) force field: the same vector at every point in space and time?

Of course this does cover most of the cases, probably even for "space" games (which are often more akin to WW1/WW2 naval simulators).


Very nice paper, thanks.


If anyone's curious about some historic examples, here is the pathfinding code from the C&C/Red Alert games:

>The path algorithm works by following a LOS path to the target. If it collides with an impassable spot, it uses an Edge following routine to get around it. The edge follower moves along the edge in a clockwise or counter clockwise fashion until finding the destination spot. The destination is determined by Find_Path. It is the first passable that can be reached (so it will handle the doughnut case, where there is a passable in the center of an unreachable area).

https://github.com/electronicarts/CnC_Remastered_Collection/...

https://github.com/electronicarts/CnC_Remastered_Collection/...

(on a side note the comments are incredibly detailed, considering this is from 1993-95)


I was only able to win Warcraft 3 by learning to cheese the heavy enemies like demons by forcing them to pathfind along walls festooned with archers. It was funny but it wasn’t really fun.

It was experiences like that which made me briefly consider a career trying to make better game AI.


I guess the problem here is that if you have a few rocks in your path, it will swerve around to the opposite side of each rock, one rock at a time, rather than going to one side of the first rock, walking past all of the rocks, and then coming back to the other side.


> The paths are simplified to produce fewer nodes with gentler turns between them, which is helpful when units drive along them

It looks like a much more useful optimization would be to produce a simplified map that splits the "real" map into "connected regions" via chokepoints. Pathing on that simplified map should be orders of magnitude simpler than on the "real" one. That's the "high-level plan": go to region A, then to region B, then the target is in region C.

Once you have a high-level plan, each unit only needs to do the expensive pathing until its next chokepoint (or to the target, if they are in the same region).

This is especially relevant on dynamic maps that are continuously changing (e.g. enemy units blocking paths, or building structures, or fog-of-war that keeps being updated, or closing and opening doors). Re-calculating the high-level path every time for every unit, and throwing it away when there's a change, is too expensive. Much more economical to recalculate the high-level map once for everyone, and then do high-level plans + pathing to the next choke.

Rimworld has a video where they explained their Region system, which implements this: https://www.youtube.com/watch?v=RMBQn_sg7DA
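To make the idea concrete, here is a rough sketch of the two-level scheme: a cheap search over a small region graph gives the shared high-level plan, and each unit then only needs detailed pathing to the chokepoint into its next region. The region graph and chokepoint table below are illustrative placeholders, not Rimworld's actual data structures:

```python
from collections import deque

def high_level_plan(region_graph, start_region, goal_region):
    # BFS over the tiny region graph; cheap enough to redo on map changes
    prev = {start_region: None}
    queue = deque([start_region])
    while queue:
        region = queue.popleft()
        if region == goal_region:
            plan = [region]
            while prev[plan[-1]] is not None:
                plan.append(prev[plan[-1]])
            return plan[::-1]
        for nbr in region_graph.get(region, []):
            if nbr not in prev:
                prev[nbr] = region
                queue.append(nbr)
    return None  # target region unreachable

def next_waypoint(plan, chokepoints, unit_region, goal_pos):
    # A unit runs expensive grid pathing only as far as this waypoint.
    i = plan.index(unit_region)
    if i == len(plan) - 1:
        return goal_pos  # already in the target region: path directly
    return chokepoints[(plan[i], plan[i + 1])]
```

The key property is that the region graph and the plan are computed once and shared; each unit's per-frame work is a short local search to one waypoint.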


Halo 2 devs also bragged publicly about how some of the AI is compiled into the map. The map understands things like “cover” and the enemies will seek out that information when reacting to the player, rather than evaluating the environment for themselves.

The map in Rimworld is destructible though, so that sort of technique doesn’t quite work.


> The map in Rimworld is destructible though, so that sort of technique doesn’t quite work.

It could just be updated. Terrain/building destruction events don't happen all that often in Rimworld, and maps aren't too big either.


Rimworld also has to update pathing and reachability information for all the buildings and NPC-usable 'stuff', so it is likely doing a lot of this already.


Runtime calculations are one thing, but for Halo 2 it was done at build time.


Well, if you can precalculate it, there's zero reason not to. You could possibly also do it for a somewhat destructible map: just recompute the area where destruction might happen and swap that part in when the destruction happens in game.


I really liked the Halo 2 AI; it was sort of the high-water mark in FPS AI as far as I saw. I probably just haven't played whatever games are advancing it; I heard the FEAR series had especially good AI but I only played it a little bit.

One thing I really liked was how chatty the enemies were. I think they claimed the Grunts would “act smarter” when Elites were around to command them. No idea if that was just marketing fluff though. Hard to notice either way, because there’s the confounding factor that the Elites are just stronger enemies so those fights would naturally be harder.

I’ve always thought it would be cool if the developers would take a “no cheating” approach — model what every unit has observed and require them to make a noise in order to share that information. It would add another dimension of difficulty — “highly trained” units could be modeled as able to communicate more information with less noise (and then add some units that just use hand signals or if it is sci-fi, psychics that can give orders silently).


I never played FEAR, but ended up reading basically this whole page when it was posted here a while ago:

https://alumni.media.mit.edu/~jorkin/goap.html

Really interesting stuff.

It appears chronologically dated at this point, but I'm not sure that anyone's made substantial improvements, given the lack of investment into single player games.


Played FEAR and the sequel coop with a friend twice - once as each char. The AI is still amongst the best I've come across. Division 2 did alright but no better.

If I was trying to do AI for a game, I would suck down every link on that web page. Even if you think you can do better, this is a great bar to measure against.


Yet another example (with nice animations, and discussion about how to deal with non-contiguously pathable regions) :

Factorio Friday Facts #317 - New pathfinding algorithm :

https://factorio.com/blog/post/fff-317


That was really interesting, thank you.

Using the low level map for the heuristic on the high level one is neat. Making the high level algo resumable is also very neat. I still think they should be pathing “only to the next region” on the high level map.


This is what they did for the StarCraft pathfinding, but I can't back that up with the links I wanted to send. Also, StarCraft definitely uses the static map for long paths, units will get stuck if you block choke points with mobile units.


I've implemented similar systems for commercial games I worked on and it works really well in a lot of fairly tricky situations.


This is gold. The part about spreading the movement so as not to create bottlenecks is something every RTS player has experienced, to the point where some RTSes from big-name developers ("Chilly") just make it so units can pass through each other.


One of the reasons StarCraft has such a high skill ceiling is that you need to individually control the pathing of your units so as not to bottleneck.


All I want is for there to be a Starcraft 3 that doesn't have most of the micro in it, so that games can hinge more on tactics and strategy than rapid clicking. It's such a pain ... I feel like microing was fun in the 2000s but now it's just annoying.


I loved the original Brood War. But The Fun™ for me was always in the macro side of things. I wanted to give units large-scale orders, and have them figure it out.

The rest of the RTS world relentlessly pursued micro, of course. And the MOBAs that came of that are fine for what they are, but what they aren't is something I personally want to play.

It always surprised me that parts of the tech research tree in any RTS didn't seem to include smarter unit behavior as part of the stack. Paying in-game resources to off-load cognitive load is an interesting loop!

Partial credit is due to Majesty, which is the only indirect-control RTS I've come across. Old game, but still fun for what it was. I liked the idea.

https://store.steampowered.com/app/25990/Majesty_Gold_HD/


A focus on operations rather than tactics is more something found in the Total Annihilation family of games: TA, Supreme Commander (1) (Forged Alliance Forever), Planetary Annihilation, Zero-K/Spring, BAR/Spring...

And shout out to Achron which managed to weave freaking *time travel* into the gameplay !!


Thanks for the recommendation. I've never tried any of that series. Any in particular you suggest to start with?


Well, my personal preference is for Zero-K/Spring, because I actually like micro :

http://zero-k.info/

(It also has a single-player campaign where you slowly get to learn the more than 100 units and features.)

But BAR/Spring seems like it would fit you better (it's also the closest of them to TA, which is otherwise a bit old these days) :

https://www.beyondallreason.info/

(BAR used to stand for Balanced Annihilation Reloaded, while Zero-K used to be called Complete Annihilation.)


Thanks, I'll check them out!


I've never even looked at Starcraft 2 with its "micro" reputation. I favour Supreme Commander. To be fair it is: Supreme Commander + Forged Alliance + the FAF online community.

Persistent between games: Configurable building templates - one building command queues whatever you set in whatever arrangement, rotatable on placement.

Factory infinite repeating queues with waypointing including automated airlift ferrying for ground units if required.

Adjustable waypoints. Move, insert, remove waypoints without having to redo the whole set from scratch.

Plus I like the storage aspect of resource management and the titanic units. Not so sure I care much for nukes though...


In war, everything is simple but the simplest things are hard.

I feel that Brood War embodies this part of Clausewitz quite well.


There will always be marginal advantages to be had in micro-managing units I think.


Of course, but the game is more fun for the vast majority of players if it's not required for the vast majority of the skill population. Pro matches focus heavily on micro-heavier caster units and that's fine if it's still fun for the players that don't want to focus on that.


you don't have to micro in starcraft to get fun out of it - but you won't win in a multiplayer match against someone who does. Whether losing due to lack of skill is fun or not will depend on the person.

The single-player mode (and custom maps) doesn't really require microing to be very fun.


And the skill in question ain't strategy.


RTS games (at least most of them) are always like that though. The ability to micromanage your units is the most fundamental skill for a competitive player.

I believe that's one of the reasons LoL and Dota became more popular than any RTS.


Can't help but point out that, in SC2 specifically, micro is not really a fundamental skill at all. Macro-ing efficiently will win you the vast majority of fights until you reach the 90th percentile of players. It's only in these high level games that micro skills become the determining factors.


Though starcraft macro has a high ratio of fiddly clicking to strategy. Less so in SC2 but still significant. Good macro involves a lot of clicking buttons in many different places as soon as they become available, because you can't queue them.


But "Macro" is not Strategy either : Strategy is all about the "why" of declaring war in the first place.

You inevitably end up with strategic considerations in games with more than 2 teams (especially if the teams are not locked), but most RT"S" matches are not played in these conditions (probably because they are too fast for diplomacy, which is itself a huge chunk of strategy).

I guess "Macro" is Operations ?


Not necessarily: you can dance all the micro you want, but if you play BAR against SA pushing that exponential eco curve, you will lose. Human skill and attention are linear.

A master of click work will lose against the onslaught of materiel of the Big E, in the long run.


I can get the point of your post in the broadest sense; however, it feels a little like trying to read a foreign language. So, just because I am trying to kill some time before my Motion Hearing starts, can you give me a breakdown of what you are actually referencing?


Strategy games have artificial limitations: from the user interface, from technology, and from the players themselves. Some games do not adhere to those limitations, like Beyond All Reason or Supreme Commander. They allow an experienced player to grow an exponential economy and marshal the produced armies without taxing the player's limited, linear attention or input rate. Meaning, the tactical advantages gained through micromanagement get swept away by an ever-larger, exponentially growing economy.

It's mostly visible in the pros-vs-joes matches of BAR players, in which a well-managed exponential eco allows recovery from impossible situations, marshalling exponential growth against several weaker (though Elo-strong) players at the same time.

https://www.youtube.com/watch?v=QuAPHw3DwMY

It's like watching O(n^2) beat O(n) to a pulp, but in an RTS. Blink and you miss it though. It's the infrastructure bootstrapping itself at the back of the base.


Logistical tactics? The Russian military was really hurt by the lack of this skill in the first week of the Ukraine invasion, lots of pathing problems.


How you allocate your own actions/APM bandwidth as a player is itself a resource the player needs to strategize over.


The starcraft parlance is Micro vs Macro.


Age of Empires 2 also does this for units in a "unit group".


I wonder why the Eikonal equation (and variations) and level-set methods are so rarely discussed. They yield any-angle paths and can also take into account the kinematics of the vehicle (e.g. minimum turn radius). I used it to solve a pathfinding problem in robotics and it could perform parking maneuvers all on its own.

The resulting algorithm bears similarities to Dijkstra's or A* (i.e. it is a sort of value iteration, see the "fast marching method") but yields a scalar field, whose gradient will give the direction of the optimal path at any point.


I guess because, while a minimum turn radius is unavoidable in real life, most games prefer to minimize it and its consequences as much as possible, due to the explosion in complexity it creates?


In game AI, these are discussed often under various guises like "influence maps" and "Dijkstra maps".


Path finding is a tricky problem. I developed my own algorithm[0] for a city builder game (Metropolis 1998) I'm working on. I needed an efficient solution to finding _all_ shortest paths between two points, and after a few weeks of trudging through the mud, I ended up with a good one.

The algorithm can handle 100,000+ units path finding to their own unique destinations at 60 FPS. [edit: on a single CPU core]. The graph is processed once, and stores all possible paths (using near minimum storage)

Additionally, I can have additional preference weights in the graph so units can use paths that match their preferences.

Note: The video[0] no longer represents the look of my game, as I've moved on to isometric[1].

[0] https://www.youtube.com/watch?v=7q0l87hwmkI

[1] https://twitter.com/YesboxStudios


The path simplification technique in the post is called "shortcutting" in robotics. Here [1] is an approachable explanation; the original paper [2] is a relatively digestible read as well, and has some other techniques in there (among them partial shortcutting, which I found extremely helpful in robotics path planning).

Regarding general multi agent path finding: There is a lot of literature around with respect to time optimal planning in MAPF both in robotics, and in adjacent fields.

[1] http://www.osrobotics.org/osr/planning/post_processing.html

[2] https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&d...
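For reference, the randomized shortcutting idea fits in a few lines: repeatedly pick two waypoints at random and, if the straight segment between them is collision-free, splice out everything in between. `line_is_free` is an assumed collision checker supplied by the caller; this is a sketch of the general technique, not any particular planner's implementation:

```python
import random

def shortcut(path, line_is_free, iterations=200, rng=None):
    """Randomized path shortcutting: path is a list of waypoints."""
    rng = rng or random.Random()
    path = list(path)
    for _ in range(iterations):
        if len(path) < 3:
            break  # nothing left to cut
        i, j = sorted(rng.sample(range(len(path)), 2))
        if j - i > 1 and line_is_free(path[i], path[j]):
            path = path[:i + 1] + path[j:]  # splice out the detour
    return path
```

The iteration budget trades quality for time; partial shortcutting (moving only one coordinate at a time) is the refinement mentioned above.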


This article is overcommitting on the number of web workers per core. You should not do n cores = n web workers; it should be at most n-2 web workers (or 1). The reason being this is a consumer device and the cores are absolutely not dedicated to just your app.


That, and unless you are purely number-crunching, or the work done by the workers is bursty rather than relatively constant, your browser will want some CPU resources for the main thread (and display updates) to stay responsive.


The article does consider issues like these towards the end.


What problem does it create to use all cores?


It's for non-server machines which will be running other things like the user's browser. You won't get the same performance on the shared cores so sometimes it's better to just run on the free ones, and without checking/adjusting it's just easier to use n-2 cores.

In a production environment you do want to eliminate the other processes.


Thanks. What does n-2 cores mean?


n here is the number of cores

So subtract two. Leave two free.
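In code form the rule of thumb is just this (a sketch; in a browser you'd read `navigator.hardwareConcurrency` instead of the OS core count):

```python
import os

def worker_count():
    n = os.cpu_count() or 1  # total logical cores reported by the OS
    return max(1, n - 2)     # leave ~2 cores for the browser/OS/main thread
```

So an 8-core machine gets 6 workers, and anything with 3 or fewer cores still gets 1.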


You want to use better localized memory structures in faster cache, and you want to allow the use of the higher frequency/IPC cores instead of being thermally/memory bandwidth limited by all the cores being loaded at lower frequency all the time.


I feel like making this multithreaded is a bit overkill. Or at least it should have presented a situation with many more units.

I made an RTS in C++ a while ago and was able to build paths for many units per frame on a fairly large map. And even if I couldn't, there's many optimizations I would have considered before threading, like only building a partial path based on portals. Or building a "bad" path and simplifying it on future frames.

Here's a great A* reference with a bunch of helpful tricks: http://theory.stanford.edu/~amitp/GameProgramming/


(Author here) I can imagine multithreading in C++ is a last resort. But with web workers it's guaranteed safe and relatively easy to do with message passing. So why not? Even using one thread lifts the performance overhead of pathfinding off the main game thread, which helps scale it up to 1000s of units, which I'm hoping to do!


Having excess processing power available is not an excuse for writing inefficient software.


> So why not

Because multithreading introduces a lot of complexity, making it inflexible and hard to scale. Of course you would never do it if the code was fast enough to not have to.

I never had to consider it, I could generate about 16 complete flow fields per frame on a 512x512 map with no quadtree optimization.

It sounds like a nice way to learn webworkers, but I think you're always better off hitting the perf bottleneck first, rather than trying to design around it early.


I'd imagine that's not the only thing that will be multithreaded.


Why not mention flow fields? They make for important movement improvements.


I've not heard of flow fields before! Do you know a good reference to read up on them?


Emerson's chapter from 2013's Game AI Pro acts as a decent overview from what I have heard.

[1] - http://www.gameaipro.com/GameAIPro/GameAIPro_Chapter23_Crowd...


They are actually pretty simple. Essentially you generate the Dijkstra values for an undirected graph (this can be a grid, navmesh, etc), then you create directed edges pointing from high values to low values. So a grid space of value 8 will point to its neighbors with values less than 8, etc.

All an agent has to do is query their current spot in the graph and it will return a vector that leads them to the next lowest cost. This is useful if you have lots of agents going to the same location.

https://www.youtube.com/watch?v=BHcQ4JCj27w

The description of this video has a lot of good resources. I made it when I was a much much worse programmer though so I wouldn't bother actually watching the video lol.
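A sketch of that description on a 4-connected grid: one Dijkstra flood fill outward from the goal assigns every cell its distance, then each cell gets a direction pointing at its cheapest neighbor. Every unit heading to the same goal shares the one field. Names and the grid representation here are illustrative:

```python
import heapq

NEIGHBORS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def flow_field(walkable, goal):
    """walkable: set of (x, y) cells; returns {cell: (dx, dy)} pointing downhill."""
    dist = {goal: 0.0}
    pq = [(0.0, goal)]
    while pq:  # Dijkstra flood fill from the goal outward
        d, (x, y) = heapq.heappop(pq)
        if d > dist[(x, y)]:
            continue
        for dx, dy in NEIGHBORS:
            nbr = (x + dx, y + dy)
            if nbr in walkable and d + 1 < dist.get(nbr, float("inf")):
                dist[nbr] = d + 1
                heapq.heappush(pq, (d + 1, nbr))
    flow = {}  # point each cell at its lowest-cost neighbor
    for (x, y), d in dist.items():
        best_d, best_dir = min(
            (dist.get((x + dx, y + dy), float("inf")), (dx, dy))
            for dx, dy in NEIGHBORS
        )
        if best_d < d:
            flow[(x, y)] = best_dir
    return flow
```

An agent then just looks up `flow[cell]` each tick; no per-agent search at all, which is why this scales so well for crowds sharing a destination.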


https://www.youtube.com/watch?v=lOYXUktahv8

No write-up to hand, but somewhere there was a blog post by the programmers and a reference to the paper.


The basic sin here is doing pathfinding around a million square cells just because that happens to be the basis of your rendering. Merge it all into convex polygons and suddenly the pathfinding is a pen & paper homework problem, because there's about 5 of those left.


Most classic RTS games have you place buildings on a grid. The buildings block movement so are important for pathing.

You could merge large squares of open space as an optimization though.


And if you pathfind on the visibility graph of those polygons' vertices, your path is even optimal in Euclidean space. Granted, that's a slightly larger graph, but still much smaller than all the grid cells.

And actually, depending on the connectivity of the fine square grid, pathfinding on it is pretty suboptimal too, because you're only finding paths that are shortest in Manhattan- or 8-connected distance (barring some fast-marching/Eikonal thing). If your units are really restricted to those motions then that's correct, but if they can move continuously then you're losing a lot of diagonals. So that's another argument for polygons.
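A quick back-of-the-envelope on how much the grid metric loses: between opposite corners of a 10x10 open square, a 4-connected path takes 20 unit steps while a free-angle mover covers the straight line in about 14.14, so the grid path is roughly 41% longer:

```python
import math

def manhattan(a, b):
    # minimum number of unit steps on a 4-connected grid
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    # straight-line distance for a free-angle mover
    return math.hypot(a[0] - b[0], a[1] - b[1])

corner_a, corner_b = (0, 0), (10, 10)
grid_len = manhattan(corner_a, corner_b)  # 20 steps
free_len = euclidean(corner_a, corner_b)  # 10 * sqrt(2), about 14.14
```

8-connected grids narrow the gap (worst case about 8% longer than Euclidean) but don't close it.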



Hmm, I keep thinking this would be better handled with 3D position vectors (on a sparse 3D grid; 3D because it's 2D game coordinates + time): not every unit is at the same place at the same time. You could even apply the same "spread" strategy in time, to avoid units following too closely.

Of course, that's more complicated, but you get collision avoidance as well. Probably better coordination than real-world vehicles. One may get weird results though, such as two units using a completely different path because that saves them two seconds, which may not be a good strategy in an RTS.


You generally want your units as close together as possible during movement, so that when the enemy attacks mid-move, the largest number of your own units is within range to shoot back.


It would be interesting to see a formation setting that dictated a desired interval between units as a protection against splash damage collateral affecting too many units at once.


I guess it all depends what kind of game you're making. The whole point of AoE damage is to hit multiple targets; having the game offer an option that just says "lol, no, you can't hit many units because they're spread out automatically" makes the interaction shallower.

Which might be entirely fine if you want to make strategy focused on macro, not micro.

Some games have compact or spread formations the player can choose. Some, like TW: Warhammer (where you control only whole squads), have some units that fight spread out and some that fight compact. All of these have different drawbacks: compact ones are good for defending chokepoints, spread ones are harder to AoE, etc.


> I guess it all depends what kind of game you're making. The whole point of AoE damage is to hit multiple targets; having the game offer an option that just says "lol, no, you can't hit many units because they're spread out automatically" makes the interaction shallower.

That just becomes another aspect of the strategic and tactical trade-offs! As the commander do I want to space my forces out, reducing their vulnerability to artillery but potentially making it take longer for the column to bring a deciding volume of fire to bear once they reach the front, possibly feeding them piecemeal into the teeth of the enemy? Has the enemy favored more artillery versus less? Are these units needed in battle this instant?

Basically the classic Napoleonic dilemma of when to deploy the troops from column into line.

But yeah as you say, it totally depends on the game that is being made.


A bit late to this thread, but it'd be remiss of me if I didn't share these slides on Left 4 Dead's AI and navigation algorithm:

The AI Systems of Left 4 Dead: https://steamcdn-a.akamaihd.net/apps/valve/2009/ai_systems_o...


Truth be told, A* over navmeshes and path smoothing ('string pulling' by eliminating points with LoS), with occupancy/weight added to individual triangles (or polygons? Not sure what's used here), is nothing particularly new... going back some 25 years maybe?

I've not been in triple-A games for many years, but I'm not sure I'd choose navmeshes today for an RTS-style game over a large terrain. They tend to be static in nature, unless you attach a lot of additional data to them, project dynamic obstacles onto them, and re-weight all the polys all the time, while making sure the resolution is fine enough for the purpose, or come up with some vector field on top of the larger polys... but no one needs to save RAM these days, do they? It's not 32MB of PlayStation 2 anymore.

Edit: forgot to add that once you realise A* is 'open ended' and that you drive it via the heuristic estimate, which can be anything, it becomes much, much more suited to 'strategic' pathfinding than what's normally described as 'A-to-B' pathfinding.


Modeling each unit singly and finding a wholly bottom-up solution works, I guess, but I wonder if it's worth coming at it from a tactical level. You're the commander pushing entire divisions across the battlefield, and the individual units in the division would be trained to maneuver together. They're soldiers with a commander, not individual ants. Shouldn't the model reflect that?

I wonder if a better solution would be to treat the whole group of units as one mega-unit, pathfind for that, then if the path for the megaunit is some factor longer than the straight collision-ignoring route, split it up into 2-6 squad-units. Or split the megaunit up into however many squad-units it takes to have a maximum of a dozen units per squad, pathfind for the squads with some collision avoidance, then do a more fine-grained process of calculating a path for each unit from its current position relative to its squad's center to the same position relative to the next point on the squad's path.

I never play this sort of game so maybe this is routine now, I dunno.


In Age of Empires 2, units that are all selected together make a sort of formation, and then move as a unit. This is visually pleasing, especially by the standards of a turn-of-the-century RTS.

I wonder if they do something along these lines, or if they literally just treat the formation as a single unit and path for that?


> That did impose some limits on parallelism, but it will probably still make full use of CPUs with up to 6-8 cores, which seems to cover most devices anyway.

Probably true, but this seems like a massive resource utilisation for an RTS game.


Oh hey, I recall using Construct Classic as one of my earlier forays into game development back in the day. At the time, visual scripting just felt like a great way of getting into things and easily playing around.

Construct seems to have come a long way since.


We have! Construct 2 was a windows program that made HTML5 games, and now we've gone full circle and Construct 3 is an HTML5 program that makes HTML5 games :)


As an old man yells at ML, this is a lot more fun. A*, Minimax, Simulated Annealing etc.


Do you know of any examples of games using Simulated Annealing for pathfinding? I've had thoughts that it would provide an interesting 'big-step' granularity for unit pathing at large-ish goal scale, with other algorithms supplementing the more granular pathing. I think this may actually be me trying to combine object/goal selection by units and pathing into a consolidated decision point, though.


I personally only remember the 8-queens puzzle. I tried Google recently and it comes up with quite a few applications: basically anything likely to be solved by freely allowing moves into a wrong spot/position/state to boost the chances of eventually being right, and then gradually lowering that chance to settle into a solution. Well, the name is self-descriptive, so I can't do better than that.


Looks really interesting - would love to have a button that would enable me to either fork the project in construct3 to poke at it or just get to play the "game" and move units around and see it in action.


The game is hosted on Github here: https://github.com/AshleyScirra/CommandAndConstruct


Flow fields scale infinitely better than A* pathfinding, can be cached with areas/portals configured, etc.

(Hi tigerworks, bornemix here from the CT forums)


Supreme Commander 2 used Flowfield path finding.


And SupCom 2 has way better pathfinding than 1, which I believe is A* based and is absolutely, game-losingly terrible.


best i can tell, dragoons in starcraft 1 used A- paths. that is, A*, but completely f-ing random


SC1 pathing is mainly plagued by how it handles collisions/blocking, and the poor choices it makes to resolve them in the face of temporary blockages (e.g. units). Its resolution strategy is basically: wait a little, then route again. If it's still blocked (which could be due to a different unit taking the blocking position after the original moved), it basically assumes it's been blocked by a building or a unit on hold position, and finds an arbitrary new route to the target. And it won't ever try to replan the path unless blocked again, so in a moving army the shortest path could be utterly wild at the exact moment it was computed, and that wild path will simply be the plan until blocked/completed.

Dragoons are particularly absurd, because their collision box changes as they walk, and they’re huge, so they block one turn and unblock the next, and block again, triggering the “full reroute” with even a trivial number of them despite congestion appearing pretty light.

I think this dumb collision resolution strategy would have been mostly fine if they had handled the ABA problem and waited longer in that scenario (or planned to replan for dramatic differences in route lengths after collision-rerouting)



