The general strategy of creating a differentiable representation of a problem and simply describing the constraints is pretty powerful. See also databases (allowing arbitrary knowledge storage to be a tightly integrated part of a larger ML problem), graph layouts (you can do _way_ better than something like graphviz if you add arbitrary differentiable characteristics as constraints in generation -- mixing and matching between the better parts of normal layout routines using your human intuition to say what's important about this graph in particular), ....
I am one of the authors, Nobuyuki Umetani from the University of Tokyo, who implemented all the code. I am delighted to see so many good discussions here!
I fully understand that the quality of the result is low by architectural standards. This research is not about computing a perfect floor plan ready for architectural design. It's more that it gives the architect some rough design choices in the very early design stage.
This research presents a new shape representation (differentiable Voronoi) for solving layout problems. These challenging problems typically involve geometrical and topological constraints. Existing shape representations are classified mainly into explicit (mesh) and implicit (implicit function) representations. Mesh representations make topological changes hard to handle, while implicit representations are problematic because the boundary is not explicitly defined, so it is difficult to compute a loss function based on the boundary shape. The differentiable Voronoi representation has properties of both implicit and explicit representations (the walls are defined explicitly, while the distance function from the sites determines the shape) and is suitable for optimizing geometry and topology together.
Thank you! I am happy that our research reaches such a large audience!!
Could you expand on the graph layout constraints? I'm working on something that expresses social system architecture and I want something better than force-directed layout; yEd's tooling was interesting but not open source.
Sure! The core of the concept is just building up a library of differentiable constraints you might like to apply to a graph and then mixing and matching (including higher/lower weights on some of the constraints) according to the human intuition you have about the graph you'd like to display.
- Suppose you want the graph to roughly fit on a page, being neither too big nor too small. This is the basic force-directed graph constraint (edges pull nodes together, all pairs of nodes push each other apart). It's often sub-par by itself because it doesn't explicitly do anything interesting with labels or edges and because it tends to emphasize local clusters while letting the rest of the graph remain fairly busy. If you _want_ to spot clusters and don't care much about other edges (maybe in disease tracking for example), that might not be a bad final result, but it's more useful as a building block onto which you can tack on other behaviors.
- Suppose the graph has meaningful topology you want to highlight. E.g., you've done a topological level sort on a DAG and want nodes in the same level to be closer to each other than nodes in other levels. Add an error term pulling nodes in the same level together. The above force-directed constraint will then handle the rest of the layout.
- Laying out nodes is a bit different from laying out edges. If you model your edges as something curvy and differentiable (splines aren't a bad choice, though perhaps add a term minimizing wiggliness too), you can add an error term penalizing too much "busyness" in the graph, e.g. by saying that it's good if you have large regions of whitespace (add up the squared areas of all the connected regions created by your edges and the edges of your bounding box, then negate that to get an error term). This will tend to coalesce edges into meaningful clusters.
- You might want to add a "confusion" term to your edges. One I like is maximizing the _average_ (not total, otherwise you'll encourage crossings) cosine distance at edge crossings.
- Force-directed layouts naturally minimize edge crossings, but you might want to add an explicit term for that if you're adding in other competing constraints.
- If you have any human intuition as to what's important in the graph (apologies, "social system architecture" isn't parsing in my brain as a cohesive concept, so I can't provide any concrete examples) then you can tag nodes (and/or edges) with an "importance" factor. The error term you'll likely want is some combination of higher repulsive forces between those nodes and the nodes in the rest of the graph (making the important information distinct), and weaker repulsive forces between the nodes in the other part of the graph (so they take up less space relative to the important structure). A fairly clean way to handle that is a repulsive force proportional to the largest importance squared divided by the smallest importance (or some monotonic function on the constituent large/small parts).
- Labels are something else you might want to handle well. As with the rest of this, there are lots of error terms you might choose. An easy one is adding a repulsive force from the edges of the label to the other nodes, an attractive force to the parts of the graph the labels should be labeling, and a repulsive force from the edges of the label to the graph edges.
- An important point is that you can reproduce basically all of the other interesting graph layouts as just some set of constraints being optimized with (projected [0]) gradient descent. E.g., to get a hive plot [1] you impose a radial constraint on nodes according to their assigned groups and just use rules minimizing edge crossings, minimizing confusion at the existing crossings, and maximizing large chunks of whitespace in the graph to meaningfully cluster the edges (plus something minimizing wiggliness if your edge description is complicated).
- And so on. This is getting quite long. Maybe a good general principle is that if you don't like something about a given layout and you can quantify what you don't like, then you can add that as an explicit error term and get a better graph. A rough sketch of what the optimization loop looks like is below the footnote.
[0] If you have "hard" constraints which can't be violated, replace gradient descent with projected gradient descent. Take a step according to the gradient, then "project" the parameter vector being optimized to the nearest location not violating those constraints (or, more simply, just project each node/edge/... individually, though convergence will likely be a bit slower).
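To make that concrete, here's roughly what the loop looks like when you glue a couple of those terms together. This is a minimal PyTorch sketch; the edge list, the "levels", and the loss weights are all made-up toy values, not anything canonical.

```python
# Minimal sketch: combine a few differentiable layout losses and follow
# their gradients with plain gradient descent. Assumes PyTorch; the toy
# edge list, levels, and loss weights are illustrative only.
import torch

n_nodes = 30
edges = [(i, (i * 7 + 3) % n_nodes) for i in range(n_nodes)]   # toy edge list
levels = torch.tensor([i % 4 for i in range(n_nodes)])          # toy DAG levels (0..L-1)

pos = torch.randn(n_nodes, 2, requires_grad=True)               # xy position per node
opt = torch.optim.Adam([pos], lr=0.05)

def layout_loss(pos):
    # Force-directed part: edges attract, all pairs repel.
    src = torch.tensor([e[0] for e in edges])
    dst = torch.tensor([e[1] for e in edges])
    attract = ((pos[src] - pos[dst]) ** 2).sum()
    pair_d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1) + 1e-3
    repel = (1.0 / pair_d2).sum()
    # Topology term: pull nodes toward their level's centroid
    # (assumes levels are labeled 0..L-1 so they index `centroids` directly).
    centroids = torch.stack([pos[levels == l].mean(0) for l in levels.unique()])
    level_term = ((pos - centroids[levels]) ** 2).sum()
    # The weights encode your intuition about what matters for this graph.
    return 1.0 * attract + 0.2 * repel + 0.5 * level_term

for step in range(500):
    opt.zero_grad()
    loss = layout_loss(pos)
    loss.backward()
    opt.step()

print(pos.detach())   # final coordinates to hand to your renderer
```

Every extra constraint (label repulsion, whitespace, crossing angles, ...) is just another term added into `layout_loss` with its own weight.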
Thanks for the detailed response, these are all very interesting constraints, but how would you convert this into an algorithm? It sounds a bit like simulated annealing? I'm aware of gradient descent algorithms used in machine learning, but how would you end up applying this to a graph layout problem?
Regarding social system architecture, I just mean system architecture, but applied to social systems.
> - Suppose the graph has meaningful topology you want to highlight. E.g., you've done a topological level sort on a DAG and want nodes in the same level to be closer to each other than nodes in other levels. Add an error term pulling nodes in the same level together. The above force-directed constraint will then handle the rest of the layout.
An example is a loose graph of nodes, where we have 2 kinds of nodes: identity nodes and authority nodes. Identities are important because they are publicly discoverable. I want the identity nodes to be more important - and I guess that's where things like topological sort get interesting.
Me too. While I recall Calculus class, I’ve been paid to operate at the algebra level for years. I think this walkthrough is a retread of differential equations, in particular the kind that attempt to approximate the result of the function as it approaches the limit of a point. I get lost at the bits involving dimensionality. I think it’s evident but it’s been years since I’ve read mathematics at this level.
It took me a moment to realize you were using "Algebra" in a pejorative sense to mean a level of mathematics beneath Calculus, rather than as "The study of composable operators on quantitative spaces." Algebra is a very deep subject, no less advanced than Analysis (The branch of mathematics that includes Calculus).
Not really. I can find or write one if you're very interested.
- Gradients (slopes) are incredibly powerful computationally. If you can compute a function, you can compute its slope with almost no overhead. Relating to physical objects, imagine walking at the top of an igloo (where it's pretty flat and you won't lose your balance) vs the sides of an igloo (where it's vertical, and you'll fall over or otherwise break something). For real-world functions, you can ask where you are (top, bottom, middle, ... of the igloo) _and also_ what the slope is (flat, steep, ...) with the second step happening nearly for free (tiny sketch after this list).
- The whole "technique" is that when the things you want aren't being found, walk in the direction of least badness. Gradients define that direction, and since they're nearly free computationally you can abuse them as much as you want.
- The rest of the "technique" is figuring out how to describe ordinary comp-sci data structures as differentiable entities. E.g., the simplest way to differentiate most discrete objects (like database insertions) is to insert them probabilistically and apply your derivatives/slopes to the likelihood of the insertion succeeding.
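The "nearly free" part is literally what reverse-mode autodiff gives you. A tiny sketch, where the function itself is just an arbitrary example:

```python
# Once you can compute a function, its slope comes almost for free via
# reverse-mode autodiff. The function below is an arbitrary example.
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

def f(x):
    return (x ** 2).sum() + torch.sin(x).sum()   # any differentiable function

y = f(x)        # "where am I on the igloo?"
y.backward()    # "how steep is it here?" -- roughly one extra pass
print(y.item(), x.grad)   # the value and the slope in each direction
```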
I wish all explanations made complicated concepts this manageable!
> the simplest way to differentiate most discrete objects (like database insertions) is to insert them probabilistically and apply your derivatives/slopes to the likelihood of the insertion succeeding.
I hope it wouldn't trouble you to go a bit further with this idea? Namely, the idea of differentiating an object or database insertion sounds like it shouldn't be structurally possible.
And why would there be a probability function for a db insertion succeeding? How could it fail - if the db was full or if there are too many requests?
Imagine the particular data structure you'd like to use for your database has if/else branching logic. A simple example thereof is some kind of a BinaryTree<Key, Value> type. Until you hit your termination condition, if the key is bigger you go one direction, else you go the other as you walk down the tree.
The problem you'll find is that the derivative of the error with respect to the _key_ is always zero. Intuitively, nudging the key just a little bit won't change the query result. Imagine some kind of a step function which is 0 for negative inputs and 1 for positive inputs. The slope at every input (except for something interesting happening at 0 -- not a "slope" mathematically, but for your real-world problem you might care about what happens there) is exactly zero.
Phrased another way, the optimization problem you'd like to solve via derivatives and gradients is discrete, requiring combinatorial techniques instead of gradient descent to solve -- at least, that's true if choosing the right database key is an important part of the problem.
If you're able to somehow make the composite algorithm tolerant of probabilistic answers, by far the easiest tweak you can make is to replace if/else logic with a probabilistic if/else. When the key is a lot bigger, almost certainly walk right in the tree. When it's a little bigger, you have a nontrivial chance of walking left instead. When it's a little smaller, you have a bigger chance of walking left and a nontrivial chance of walking right.
Doing so allows you to represent the output as `p * f(right) + (1 - p) * f(left)`, which is differentiable in `p`. There's a bit of nuance in how you get the proper gradient if you only actually take one path (basically, some of the time you'll take the right branch, some of the time you'll take the left, and you want to re-weight the gradient whenever you take a given branch so that if you made that decision many times and averaged the results you'd converge to the formula above), but since you have something differentiable in `p` you're able to propagate that information back to the key you used to derive the probability.
The space where I most see people wanting differentiable databases is in creating giant ML thingamabobs. In that case, the high dimensionality of the value being stored can work in your favor. E.g., you could query the DB a few times, record the probabilities of each path, and use those to do a weighted average of the values you found. High-probability paths will dominate the result, and low-probability paths might still mix in a little interesting information.
That technique would likely not work if you wanted to, e.g., just drop in a differentiable database into your favorite web app and try to follow the gradients to find the inputs causing a certain bug. You'd need something fancier, probably not based on trees at all (at least not explicitly; you might use a tree internally to represent the data, but the conceptual computation wouldn't have those if/else step-function cliffs).
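Here's a rough sketch of that idea for a single comparison node. The sigmoid "temperature" and the score-function re-weighting mentioned above are one common way to set this up, not the only one, and the goodness values are toy stand-ins:

```python
# Rough sketch of a probabilistic if/else for one comparison node.
# The sigmoid temperature and the toy "goodness" values are illustrative.
import torch

split_key = torch.tensor(5.0, requires_grad=True)   # the learnable comparison key
query_key = torch.tensor(6.2)
temperature = 1.0

def f_right():  # "goodness" if we walk right (toy stand-in value)
    return torch.tensor(0.9)

def f_left():   # "goodness" if we walk left
    return torch.tensor(0.4)

# Probability of walking right: near 1 when query_key >> split_key,
# near 0 when query_key << split_key, soft in between.
p = torch.sigmoid((query_key - split_key) / temperature)

# With a tiny tree you can just compute the full expectation and differentiate:
# expected goodness = p * f(right) + (1 - p) * f(left).
expected = p * f_right() + (1 - p) * f_left()
expected.backward()
print(split_key.grad)   # how nudging the split key changes expected goodness

# The re-weighting remark: if you only take ONE sampled path per query,
# multiply that path's outcome by d(log prob of that path)/d(params) -- a
# "score function" estimator. Averaged over many samples this converges to
# the same gradient as the exact expectation above.
```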
So there will be some key given to the database. And the key either gets results from the right branch or the left (at decision points, as opposed to leaves which would be the result itself). But if you hardcode the branching condition to be something like a greater than or less than, it's not very useful in finding the best result for the key.
But if you have a probabilistic setup, then the key has some chance of retrieving data from the right and from the left. Then we have
expected goodness = p * f(right) + (1 - p) * f(left)
Where f is some objective function that tells us the 'goodness' of the resulting data the key got from going right or left. If you differentiate the expected goodness with respect to p you get f(right) - f(left). Which is how much the expected goodness will change if I increase the probability of going right.
Suppose the derivative f(right) - f(left) is positive, so going right is better than going left. Then I can write some script to increase p so that next time I get that key, the probability of going right is higher. That way we can optimize p for given keys.
Very interesting! I hope I got everything right; where I couldn't understand, I used gpt to help me out (when it told me f was a 'goodness' function, I had a breakthrough lol). The discrete way feels like clean, understandable logic while the probabilistic way with changing p values feels really messy, but I can see intuitively why the latter gives better results.
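Concretely, the script I'm imagining is basically this (toy numbers):

```python
# Toy version of the update: d(expected goodness)/dp = f(right) - f(left).
f_right, f_left = 0.9, 0.4
p, learning_rate = 0.5, 0.1

grad = f_right - f_left                    # +0.5: going right looks better
p = min(1.0, max(0.0, p + learning_rate * grad))
print(p)                                   # 0.55 -- this key now leans a bit more right
```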
- They are selecting only parts of the rich tapestry of what is meaningful and useful about an architectural space, which can also vary by culture and by person.
- They also don't tend to respond to regulation requirements which may be 'hard line' or 'guideline' depending on the place.
It's a complex world out there for good reason and it doesn't usually reduce well to a few key variables.
> They also don't tend to respond to regulation requirements which may be 'hard line' or 'guideline' depending on the place.
Not sure if you read much of the ones I posted, but they (and the latest version of that work stream in the author’s dissertation) definitely deal with accessibility requirements and so on.
That’s pretty much the entire point of tools like these - it’s trivially easy to subdivide a polygon, but decidedly non-trivial to do so while maintaining various constraints, like clearance sizes, connectivity, daylighting requirements, structural requirements, etc etc.
I disagree. The world is adding a Manhattan’s worth of buildings to the global building stock every 5-6 weeks or so. It’s insane. The vast majority of those buildings are not being designed by boutique architecture firms… there is definitely a need for better integration of analysis methods for the daylighting, energy usage, and embodied carbon impact of design decisions throughout the design-build process, but especially in early stage design, and especially when operating at scale. But that’s just my 2c
This is true but what does it add up to besides more features in Revit?
Similar forces across all professional industries. Every medical operations innovation also boils down to, wait for it to appear in EPIC for free. Why do free R&D for Autodesk and EPIC?
Big picture, sometimes people are saying too much construction (you). Other people are saying not enough. It’s complicated right?
Software entrepreneurs are in denial about how exceptional Figma was. Adobe is asleep at the wheel sure. But every other professional services company? I don’t know, it’s tough for Godot, Blender, Black Magic, EPIC competitors that no one has heard of… Then you listen to what CEOs who are paying attention actually say: marketing, monetization, sales, etc (John Riccitiello comes to mind).
Meanwhile Gensler, Dentsu, whatever are using the kinds of sophisticated analyses you’re talking about to sell post-construction support, it’s a little insincere. It’s just increasing cost to convince them to do a better job.
One thing’s for sure: there is a glut of buildings people do not want to buy and also a shortage of buildings people are willing to pay dearly for. Better massing isn’t going to solve that. You’re talking about stuff that goes into slide 30 in the pitch deck. The prince has already walked out of the room by then.
I think I agree with a lot of what you said to some extent, but not entirely
> Big picture, sometimes people are saying too much construction (you). Other people are saying not enough. It’s complicated right?
For what it’s worth, I’m not saying “too much construction” - I’m value-neutral on how much construction will happen over the next few decades and was just trying to state the fact that there will undeniably be a lot of it, for better or for worse.
> Why do free R&D for Autodesk and EPIC?
I mean I know people working on the sorts of tooling being discussed here, and yes some of them either go on to work at Autodesk or sell IP to Autodesk. A major path in academia is to try things out that are even farther at the bleeding edge than industry R&D, then go on to form startups (which yes often result in failure when they realize that production-grade development is significantly harder or that there is no market for it or, like you said, Autodesk folds it into a pre-existing product).
> This is true but what does it add up to besides more features in Revit?
You say that like it’s a bad thing. It might be a depressing realization for people who hope to change the world with their work, but even if just a handful of buildings end up taking advantage of a certain feature that results in a better column grid (eg), it can end up offsetting more carbon than the original researcher would by riding their bike into school for like, 20 years rather than driving. In fact, if you really care about maximizing your impact as a researcher, having your tool folded into an Autodesk product, even if they copied your methodology from the papers you published and you get 0 financial compensation of any kind, is kind of the ideal outcome (even if that makes you a chump). You should just go work for Autodesk if you are upset about being a chump.
> You’re talking about stuff that goes into slide 30 in the pitch deck. The prince has already walked out of the room by then.
While I love this last paragraph from a literary perspective, and it does resonate with me, I’m not entirely sold on the idea that the entirety of analytically informed design should be viewed through such a pessimistic lens. I agree that a lot of numerical analysis of building performance in the context of clients/sales/etc is just smoke and mirrors/dog+pony show stuff, but that doesn’t mean that it does not still have a real impact on the actual buildings being designed. The point of these tools is not to make better pitch decks, it’s to make better buildings (from a carbon perspective, from a user perspective). It’s not like the work just goes out the window.
Architecture outside of the academy has been pessimistic for decades.
If you want scientifically-driven interventions that could make a difference, create tools that let investors and laypeople completely bypass the pre-existing real estate economy. Make software that designs spec homes, make robots that build spec homes, make self-driving cars, make telepresence to eliminate commutes entirely, make chatbots that file permits and respond to nuisance neighbor lawsuits, make gasses that can turn deserts and tundras into temperate climates, make safer sedatives and painkillers to put smiles on gruesomely NIMBY boomer neighbors. Tall order, right? That's like saying, "Destroy California."
> create tools that let investors and laypeople completely bypass the pre-existing real estate economy
> Make software that designs spec homes,
But isn’t that effectively what the original projects in top level post and in my top level comment are aiming to do? Make it possible to rapidly and easily generate architecturally and structurally meaningful designs within a fully automated end-to-end pipeline (even if the whole pipeline isn’t in place yet)?
> make robots that build spec homes
Yep there are people that I know working on this or adjacent to this (ie rapidly constructable pre-fab homes/tiny homes, 3D printable homes using on-site dirt and mud, drones that assemble structures, etc etc).
> bypass the pre-existing real estate economy
At the same time, you have to be somewhat pragmatic and recognize that this is so entrenched that the likelihood of successfully disrupting and sidestepping is low and there is value still in tactically intervening within existing frameworks…
Out of curiosity what is your involvement (if any) in AEC? Just curious as it helps to understand each other’s perspectives. For context I am in academia (duh lol) but also at a decarbonization/retrofit planning startup which works with large portfolio owners (ie thousands of buildings at a time).
For sure. Read the author’s dissertation (should be posted on MIT DSpace by now or in the coming weeks) - it’s excellent. He just started teaching at Berkeley - definitely worth following his work. (Disclosure: I wrote all the Python API/web app/infra stuff in the repo above but not any of the underlying algorithms).
I'm no architect, but surely the precise details of the exterior walls are decided based on the floor plan, not the other way around? Seems odd to assume the walls are fixed before the floor plan has been determined.
Of course, the shape of the lot and other physical factors put general limitations on the bounds of the house, but filling the entire lot isn't usually the primary goal.
I’m not sure why you’d assume that rather than the inverse. The house is set to fit a certain number of square feet based on economic concerns (heating and cooling costs primarily), then the ordinances on setback and separation come into play, then the very clear rectangle that results is your starting point for interior planning.
Both are used. Hence, any good design should explicitly state what the goals are (ie. what we are optimizing for) before embarking on the design process.
Commercial real estate generally optimizes for profit, which means floor space; however, this may be subject to regulation and particularly significant cost constraints (e.g. maximum height before alternate structural features are required, maximum height of available raw materials, HVAC/insulation, site aspect, site topography).
High end residential real estate is perhaps the most interesting, because good residential architecture facilitates aesthetic concern and draws from the full palette of commercial architecture in addition to traditional methods while not being constrained by finance. Hence, very interesting results can sometimes be obtained from good architects who will consider factors such as foliage, natural audio, etc. which are often ~ignored or afterthoughts in commercial/industrial.
IMHO good and original design in adequately resourced contexts tends to be iterative and to consider all paths toward a solution, not only a preconceived approach and a waterfall solution.
In my experience in the arch industry this type of space planning is more used in large buildings with a lot of different spaces (think college buildings). Usually the building is already built but being "renovated". A residential house doesn't really have the need for this type of algorithmic design for < 10 rooms.
I was thinking the same thing, although what about when someone is remodeling a residential house that has a main floor with limited space, and the family would like a bigger kitchen or a smaller living room? I'm wondering if it could be applicable in that scenario, or if it would be way more overkill than necessary, taking into account of course external walls, headers, and other necessary limitations that might rule out using something like this.
I kind of feel like it's overkill, but I'm curious what it would come up with if given pretty strict boundaries in terms of space and dimensions, even on a very large, multi-million dollar house. What's the threshold for when it could be applicable, versus "Yeah, we don't need something this insane to do this."
I do agree, it would be completely applicable to large sprawling buildings that are being renovated, where the owners are looking at how they can utilize existing space better.
With typical brick and concrete structures there are many fixed, nonnegotiable walls and pillars: load-bearing ones that must be present at a precise location and aligned with their counterparts on higher and lower floors, or external ones that must respect layout limits (e.g. far enough from the street, leaving a sidewalk of suitable width, global limits of the area of the whole building) and aesthetic considerations (e.g. straight without random recesses, symmetrical repeated units). Flexible internal walls are a minority.
This is fascinating to me because I once tried to take a (vaguely) similar approach to generate a procedural city layout, taking a Voronoi diagram and then doing some modified flood fills to create buildings within the city while leaving streets.
It feels to me like their approach could be used for this as well, since there's of course nothing that requires it to only be used for generating floor plans.
Geez, as architecture these plans are absolutely horrible and produce unusable spaces. As an abstract math problem, it seems marginally useful, but I would not want to live in a place laid out by this algorithm.
As a non-architect, I would love if you could explain some of what makes the non-duck designs _so_ horrible. Some of them looked more than fine to me at first glance. It's also something that can be rerun over and over with little interaction in-between runs so one could generate a handful of designs starting from different seeds and get inspiration.
You generally want your rooms to be not just rectilinear, but rectangular after you fill in all built-in storage spaces. And storage spaces have to be sized appropriately to their intended purpose: you don't want three-foot-deep shelves. Another thing this optimizer doesn't take into account is corridors. It has the notion of room connections and area ratios, but area ratios are the wrong way to think about corridors: you want them to be as small as possible while still having at least a certain width.
The algorithm prefers rectangular rooms, you'd just need to adjust it a bit. The wall loss function minimizes wall length and tries to regularize angles, so it naturally converges to square-ish rooms:
> we compute the norm of the tangent edge vector, which is equivalent to the norm of the 2D normal direction. Minimizing this loss has the following two effects: simplifying the wall between two adjacent rooms by penalizing its length, and aligning the boundary to the coordinate axis by making the coordinates of the tangent vector sparse in 2D.
The optimizer also seems like it should handle corridors okay as long as the corridor area is set to something reasonable. A corridor is just a room that is allowed to be long; since the other rooms will try to take up relatively square spaces you should be left with a long connecting area.
> And storage spaces have to be sized appropriately to their intended purpose: you don't want three-foot-deep shelves.
Like you said, this is the same minimum/maximum width problem that makes corridors wonky. I think this is relatively easily solved, though. A "minimum width" constraint is really just a requirement that no voronoi site is within X distance of a wall. A shelf is a sub-area in a room where there must be 2 borders within X distance of a voronoi point. Things like furniture and kitchen islands are also basically represented like that, as constrained areas.
A simpler alternative to complex per-point constraints would be to have area constraints per control point- a bunch of single-point "rooms" inside the actual rooms. In the case of a corridor, since the voronoi cells tend towards a square, you just need to set that area to the minimum width and they should avoid shrinking below that width.
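If you want to play with the area-constraint idea, a crude stand-in is a softened Voronoi assignment on a pixel grid with a per-room target area. To be clear, this is not the paper's exact differentiable Voronoi formulation, and the site count, targets, and softmax temperature below are made-up values:

```python
# Crude sketch of "target area per room": softened Voronoi membership on a
# pixel grid (NOT the paper's exact formulation), with a loss pushing each
# room's area toward a target fraction. All numbers are illustrative.
import torch

n_sites = 5
sites = torch.rand(n_sites, 2, requires_grad=True)          # room "seeds" in the unit square
targets = torch.tensor([0.3, 0.25, 0.2, 0.15, 0.10])        # desired area fractions

g = torch.linspace(0.0, 1.0, 64)
grid = torch.stack(torch.meshgrid(g, g, indexing="ij"), dim=-1).reshape(-1, 2)

opt = torch.optim.Adam([sites], lr=0.02)
for step in range(300):
    opt.zero_grad()
    d2 = ((grid[:, None, :] - sites[None, :, :]) ** 2).sum(-1)   # (pixels, sites)
    soft = torch.softmax(-d2 / 0.01, dim=1)                      # soft cell membership
    areas = soft.mean(dim=0)                                     # area fraction per cell
    loss = ((areas - targets) ** 2).sum()
    loss.backward()
    opt.step()

print(areas.detach())   # should roughly approach the target fractions
```

A minimum-width or shelf-depth constraint would be an extra penalty term on site-to-wall distances added to the same loss.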
I guess that one gripe is that it's clear that they looked at this as a math problem, and didn't really take into account people who have the accumulated wisdom of laying out hundreds of spaces, the pitfalls that arise when doing that, and the complaints that only surface when you are actually inhabiting the space. This is a case of the classic engineering mindset of 'let me go into a field which I have zero practical experience with, then reduce everything to a simple math problem and ignore everything you think you know about doing this'.
No, they didn't. When an artist draws a picture of Manhattan, they draw steam coming up from the streets and no utility poles. They don't need to know anything about the buried power infrastructure or steam pipes. It would probably be worse if they did; the point is to communicate the conception of the space. It does not matter if steam is coming up on a street where it shouldn't; they drew it to say "this is manhattan", not to depict a component of the infrastructure that exists off-canvas. It does not matter if businessmen are walking around at 3pm when they should be at work if the point is to indicate that business is happening. The kayfabe is more important than the reality.
This is for a computer graphics conference, not an architectural conference. The point is to generate more-plausible interiors instead of copy-pasting the same layout. They are generating the feeling of more realistic spaces. You're the one coming in and saying that they don't know what they're doing, and trying to simplify an art into something you understand.
My critique in the parent comment is exclusively on architectural terms. For video games or other spaces that won't have any actual people living there, I don't have a dog in that fight.
Based on some of the floor plans I've seen, architects also seem to have developed a habit of designing weird floor plans somewhere after the year 2000. I don't know why they thought apartments could need corridors with diverging or converging walls in the hallways, or why rooms without 90 degree angles are a good idea, but their decisions don't seem that far off the floor plans this tool generates.
These outputs are far from perfect but unfortunately they're not unrealistic.
If you just look at the plans in the initial diagram, Plan C is the only one that makes marginally useful spaces, avoiding excessive corners and awkward leftover space that ends up without use. It also happens to be the most obvious solution to the problem.
I could see the value in generating a bunch of rooms roughly accounting for the space requirements, but that's usually done in a more abstract way before taking into consideration things like privacy, thoroughfares, building conventions, spans, noise and light access, wind patterns, and things of that nature.
It seems like if you already have a weird space, you can also maybe use this to minimize cuts while you’re adding carpet, or solve constraints about minimizing distance for shared water supply pipes that need to touch the sink and the shower, etc.
That said I actually might want a room shaped like a duck, provided I don’t have to put down the flooring =)
I understand that this might be out of scope of the project, but floor plan design is more convoluted than filling the space with some constraint.
Necessary things to respect:
- rectangular rooms: non-right angles are hard to work with
- windows: unless this is a plan for the underground floor
- water/waste water supply/vent shafts: for multi-floor buildings these cannot be moved
- personal needs: some people need lots of storage space, some a room to play videogames, some a huge kitchen with an island. All of that must be reflected in a good floor plan
With that said, I feel like this project might be good for less serious application like procedural game design, but is too naive for real architectural use.
Windows and pipes/vents are just constraints. Windows are constraints to outside areas and pipes/vents/stairs are constraints between floors.
Rectangularity is enforced by the wall loss function, which is adjustable.
> personal needs: some people need lots of storage space, some a room to play videogames, some a huge kitchen with an island.
That's... the point. You decide you want your kitchen to be 20% bigger and the floor plan readjusts itself without needing somebody to draw up a new plan.
> With that said, I feel like this project might be good for less serious application like procedural game design, but is too naive for real architectural use.
Given that this is a paper submission to the Pacific Graphics conference, yes. Architecture is not really the point. You could also point out some much more obvious architectural needs, like the fact that houses need an outside too.
I suspect this is all well understood by the authors. We do research one simple step at a time, then we sell it as "look at this amazing thing" because we like funding (which enables future research and eating food!)
The cells are the areas closest to the voronoi sites, the dots inside the cells. The sites are the outputs of the optimizer; it adjusts the sites until the constraints are met well. The number of sites is chosen by the user.
The excess sites will naturally be clustered together and weakly tend away from the interior walls. The optimizer prefers straighter walls, so it tries to cover the room in large cells (which have big straight edges) and then shove the rest of the points into a small area where they aren't affecting the internal walls. That's why they tend to cluster into the center of internal rooms, or towards exterior corners where the wall is preset.
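If sites vs. cells is hard to picture, scipy will draw it directly (random sites here, purely for illustration):

```python
# Quick way to see sites vs. cells: random sites plotted with their Voronoi cells.
import numpy as np
from scipy.spatial import Voronoi, voronoi_plot_2d
import matplotlib.pyplot as plt

sites = np.random.rand(12, 2)    # the dots an optimizer would move around
vor = Voronoi(sites)             # each cell is everything closest to its dot
voronoi_plot_2d(vor)
plt.show()
```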
An FPS Death Match map, but every time a new match is started, the layouts of the rooms change somewhat. And maybe even like another commenter talked about, buildings themselves could be positioned using this technique too, so even the layout of the buildings and the roads change a bit every new match.
It’ll be familiar, but at the same time unfamiliar, every time a new match starts. Some strategies will be useful across matches, some strategies will not.
The game 'Due Process' has procedurally generated levels, but they are based on tilesets - I think rooms generated the way the OP describes wouldn't make sense and would be very disorienting.
Cool. In the '90s I taught similar approaches at our university, and we had practical examples of such constraints. In general, city planning was much easier than structural buildings, because gravity and building rules are much stricter than city planning rules, which are trivial.
We also had rules laid out graphically, and did a substitution process in the optimizations.
Neufert did help much more than Christopher Alexander.
This is cool, in part because it's similar to an idea I had a while back but didn't know how to actually implement.
The way I imagined it working was one Voronoi cell per room, but with a tweak where the Voronoi cells are weighted such that you can grow or shrink a Voronoi cell to fit whatever use you have for the room. (At which point it wouldn't really be a Voronoi diagram anymore, but I don't know if there's a name for this other thing. It ought to at least theoretically be possible to compute, because the way soap bubbles stack in 3 dimensions doesn't require all the soap bubbles to contain an exact quantity of air.)
You could make the case that that isn't really necessary, since you can adjust Voronoi cell size just through the strategic placement of its neighboring cells, but it seemed useful to have an extra axis of control.
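Roughly what I mean, as a grid-based toy (the sites and weights are made-up numbers; a bigger weight grows that cell):

```python
# Toy illustration of weighted cells on a pixel grid: each point of the plane
# goes to the site minimizing (squared distance - per-site weight), so a
# bigger weight grows that cell. Sites and weights are made-up values.
import numpy as np

sites = np.array([[0.3, 0.3], [0.7, 0.4], [0.5, 0.8]])
weights = np.array([0.00, 0.08, -0.02])     # grow cell 1, shrink cell 2 a bit

g = np.linspace(0, 1, 200)
xx, yy = np.meshgrid(g, g)
pts = np.stack([xx.ravel(), yy.ravel()], axis=1)                 # (pixels, 2)

d2 = ((pts[:, None, :] - sites[None, :, :]) ** 2).sum(-1)        # (pixels, sites)
owner = np.argmin(d2 - weights[None, :], axis=1)                 # weighted assignment

# Fraction of the square owned by each "room":
print(np.bincount(owner, minlength=len(sites)) / len(pts))
```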
The outer surfaces of the building could be given dome shapes, but it may be more aesthetically pleasing to give them the same angular surfaces as the interior walls by having "imaginary" cells bordering the outside that aren't part of the finished building but instead define its shape.
I'd imagine the building could either be 3d-printed, or it could be constructed out of flat wall panels that are made in a factory, shipped to the building site, and bolted together (or affixed some other way) at the edges.
The wall panels could also be shaped so that they form shelves or other usable surfaces out of the strange angles (especially the non-vertical walls).
Ideally you'd have some program that can endlessly generate floor plan variations based on user input and site geometry, and then lay out all the plumbing and wiring and verify that it meets structural constraints, and the whole design manufactured and assembled without any human designer in the loop at all. No two houses need ever be exactly alike, every major component manufactured on-demand from standard material inputs.
This would all be pretty hard to set up in reality, but it's how I imagine new construction working in near-future science fiction cities, and eventually space habitats. You spin up an O'Neill cylinder, and some program generates a layout to fill much of the interior with soap-bubble cities like massive mega-structure apartments, but without any repetition. Every neighborhood different from every other, maybe with some common stylistic elements to distinguish one part of town from another, but every location unique, like nature.
That's the reason architecture is closer to an art than a hard science. The optimal math answer usually sucks from a usability/look/&c. point of view