Yeah, seems like that should be possible - at the end of the day WFC is a system of constraints, so you can add any arbitrary constraints you want on top of it.
With the examples I posted above, since they wrap/loop in the time dimension, there is always some similarity/consistency. For example, in the one with the rabbits, if a rabbit dies, that means there will have to also be a birth to get back to the correct number of rabbits before it loops.
I used this algorithm to generate minigolf courses [1], building on this repo [2].
The algorithm is literally just a constraint solver. I'm pretty sure it's quite similar to the Sudoku solver I wrote in Prolog for a university course.
It's kinda pretentiously named and described, probably because it's more academic to do that. It's just a constraint solver.
Fun stuff, but I struggled to get a lot of value out of using it for level gen. You get cool patterns, but levels need structure and intent to be interesting. Adding constraints to the algorithm becomes a big-O nightmare, and you end up with frequently unsolvable paths as the algorithm recurses.
The game Bad North used it to good effect, so depending on the game it may be a very useful tool in the toolbelt.
I wonder if it could be combined with the kind of tool Path of Exile uses. They draw very rudimentary maps (river here, N bridges, etc.) and then the tooling generates the level around those constraints.
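For what it's worth, that kind of pre-seeding drops straight into a WFC-style solver: you shrink the domains of the hand-drawn cells before the normal solve starts. A minimal Python sketch (the tile names and the helper are hypothetical, just to illustrate the idea):

```python
# Hypothetical sketch: seed a WFC grid from a hand-drawn rough map.
TILES = {"grass", "water", "bridge", "sand"}

def seed_from_rough_map(grid_size, rough_map):
    """Start every cell in full 'superposition' (all tiles possible), then
    pin the cells the designer drew by shrinking their domains to one tile."""
    width, height = grid_size
    domains = {(x, y): set(TILES) for x in range(width) for y in range(height)}
    for pos, tile in rough_map.items():
        domains[pos] = {tile}  # pre-collapsed: the solver must respect this
    return domains

# A crude "river across the map, with two bridges" annotation:
rough = {(x, 5): "water" for x in range(20)}
rough[(4, 5)] = "bridge"
rough[(15, 5)] = "bridge"
domains = seed_from_rough_map((20, 10), rough)
# ...then run the usual observe/propagate loop over `domains`.
```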
How slow does an algorithm have to be before it stops saving time on level design, realistically? Obviously if it straight up fails for complex sets of constraints that’s one thing, but even running for a day, if it yields usable results, that seems like a timesaver.
Oskar Stålberg has a great overview talk on Wave Function Collapse and how he applied it to his previous game Bad North - it’s absolutely worth a watch! https://youtu.be/0bcZb-SsnrA
I've struggled with implementing my own version of this algorithm, so I know I don't understand it properly. I get lost because of the "quantum mechanics" aspect - I kind of suspect it's just an analogy for the actual algorithm (maybe it's not?)... but I'd love to see it re-written without the "entropy" and "superposition"-type wording, just to see if I can finally make it click in my brain!
Specifically, read the 'Least Entropy' section - it's broken down very simply:
Pick a random cell from among the 'least random' ones (fewest remaining possibilities), because, having already resolved whatever constraints we can, such a cell is the least likely to cause problems (i.e. unsatisfiable constraints) later.
> The heuristic of selecting the most constrained variable or equivalently the variable with minimum remaining values (MRV) is well known in constraint solving.
> Since there is more than one valid pattern for that location—or it would already have been set to zero entropy in the previous loop—one of those patterns needs to be chosen. One of the patterns is chosen with a random sample, weighted by the frequency that pattern appears in the input image.
> This implements Gumin’s secondary goal for local similarity: that patterns appear with a similar distribution in the output as are found in the input [12].
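For the code-minded, that selection step is only a few lines. Here's a rough Python sketch of the observe/collapse step those quotes describe - `domains` (each cell's set of still-possible patterns) and `weights` (each pattern's frequency in the input image) are my names, not Gumin's:

```python
import math
import random

def entropy(cell, domains, weights):
    """Shannon entropy of a cell's remaining patterns, weighted by how
    often each pattern appeared in the input image."""
    total = sum(weights[p] for p in domains[cell])
    return -sum((weights[p] / total) * math.log(weights[p] / total)
                for p in domains[cell])

def observe(domains, weights):
    """Collapse the undecided cell with the lowest entropy (the 'least
    random' one), sampling a pattern weighted by input frequency."""
    undecided = [c for c in domains if len(domains[c]) > 1]
    if not undecided:
        return None  # grid fully collapsed
    cell = min(undecided, key=lambda c: entropy(c, domains, weights))
    patterns = list(domains[cell])
    pick = random.choices(patterns, weights=[weights[p] for p in patterns])[0]
    domains[cell] = {pick}
    return cell  # the caller propagates constraints outward from here
```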
I agree that the quantum mechanics terms are often more confusing than helpful here. To the point that I wrote a journal article to try to demystify how the algorithm works [1].
If you're familiar with constraint solving, "superposition" is just "the remaining possible choices in the domain" and "entropy" is just describing how to select the next node.
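And "collapse"/"propagation" is just arc consistency: once a cell is decided, delete from its neighbours any pattern that no longer has a compatible partner, and repeat until nothing changes. A sketch, assuming a `compatible(p, q, direction)` predicate built from the adjacency rules learned from the input:

```python
from collections import deque

def propagate(start, domains, neighbors, compatible):
    """Shrink neighbouring 'superpositions' (domain sets) after a cell
    changes: drop any pattern with no compatible partner, repeat to fixpoint."""
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        for direction, other in neighbors(cell):
            removed = {q for q in domains[other]
                       if not any(compatible(p, q, direction)
                                  for p in domains[cell])}
            if removed:
                domains[other] -= removed
                if not domains[other]:
                    raise ValueError("contradiction: a cell has no options left")
                queue.append(other)  # its neighbours may shrink too
    return domains
```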
Academics don't make any money from sales of paywalled journal articles. If you click a link to a journal article and hit a paywall, it's usually because whoever shared the link has institutional access to the journal and forgot that the paywall was there (since it effectively isn't for them.)
A while ago I wondered why you would link to journal articles, as they're often behind a paywall and, as I understand it, the author doesn't get a cut. Maybe it's for the citations?
Do authors mind sharing the article for free next to the publication link?
I looked at WFC recently (in "Level generation and style enhancement - deep learning for game development overview", https://arxiv.org/abs/2107.07397).
Yes, it is a big inspiration, and there are numerous beautiful examples of its results. However, the QM wording is at best an extremely loose analogy (one that a hacker would use). If you want to compare it to other methods, see "WaveFunctionCollapse is constraint solving in the wild" https://dl.acm.org/doi/10.1145/3102071.3110566.
"WaveFunctionCollapse is constraint solving in the wild."
My problem with that was that the algorithm basically always ran into unsolvable states. The WFC solution is to just start over, but most WFC implementations use only a few (<10) tile types. The Zelda map had several hundred; you can actually see the algorithm searching for valid solutions in the videos I linked.
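For reference, that restart strategy is literally a retry loop around the whole solve - something like the sketch below (names illustrative) - which is why a large tile set hurts so much: every late contradiction throws away the entire grid:

```python
def solve_with_restarts(make_domains, solve_once, max_attempts=100):
    """Naive WFC failure handling: on contradiction, discard everything
    and restart from a fresh, fully-uncollapsed grid."""
    for attempt in range(max_attempts):
        try:
            return solve_once(make_domains())  # raises on contradiction
        except ValueError:
            continue  # some cell's domain emptied; start over
    raise RuntimeError(f"no solution found in {max_attempts} attempts")
```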
The algorithm is much simpler than I thought it would be. It basically takes an image made of tiles (think the levels of an NES game) and searches for an arrangement of those tiles with the property that each tile is immediately surrounded by the same types of tiles as in the original image.
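Concretely, "surrounded by the same types of tiles as in the original image" just means recording which tile pairs ever appear next to each other in the example; the generator may only reproduce those pairings. A toy sketch of that learning step (simple tiled variant, made-up tile names):

```python
def learn_adjacency(example):
    """Scan an example tile grid and record every (tile, neighbor, direction)
    pairing that actually occurs in it."""
    rules = set()
    height, width = len(example), len(example[0])
    for y in range(height):
        for x in range(width):
            if x + 1 < width:
                rules.add((example[y][x], example[y][x + 1], "right"))
            if y + 1 < height:
                rules.add((example[y][x], example[y + 1][x], "down"))
    return rules

example = [
    ["water", "sand", "grass"],
    ["water", "sand", "grass"],
]
rules = learn_adjacency(example)
# ("water", "sand", "right") occurred, so it's allowed; ("water", "grass",
# "right") never occurred, so grass can never sit directly right of water.
```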
Love how the 3D images look like they're taken straight out of Labyrinth, never mind M. C. Escher. Wonder what it would take to make those render more efficiently.
As someone who did game dev for a while, I sure hope this plays well with semi-hi-res photos (1000x1000), as it seems better than conventional algorithms for unique texturing.
I did some experiments with creating looped animations a few years ago:
https://twitter.com/MattRix/status/979020989181890560
https://twitter.com/MattRix/status/872648369625325568
https://twitter.com/MattRix/status/871054734018453505