Who builds a house without drawing blueprints? (2015) (acm.org)
19 points by mxschumacher on June 11, 2020 | 47 comments


I don’t actually disagree with anything in the article, but comparisons between software and buildings - especially buildings - leave me cold.

There is almost nothing actually in common between software and buildings. You can’t move a house as a matter of course. You can’t modify a house in a few keystrokes. You can’t just copy a house or delete it. You don’t generally consider how a house will evolve over time. A house has a limited number of possible states, and a trivially low number of entrances and exits. The analogy is so poor that it’s difficult even to make analogies about how bad an analogy it is.

I have thought for a long time that building analogies are horribly ill suited to software. Perhaps better than nothing, but only barely. Software is a thing unto itself, and these analogies just make me want to run away screaming.


As the article states, "However, metaphors can be misleading, and I do not claim that we should write specifications just because architects draw blueprints."

He's not arguing for specifications because architects draw them for builders. Rather, architects draw them for some of the same reasons that the author posits that software specifications should be written.

As it says elsewhere... "[A]s the cartoonist Guindon wrote: 'Writing is nature's way of letting you know how sloppy your thinking is.' We think in order to understand what we are doing. If we understand something, we can explain it clearly in writing. If we have not explained it in writing, then we do not know if we really understand it." That's why one should write specifications.


A written software specification can still be spectacularly wrong. Writing down a spec is no guarantee that you have avoided fuzzy and incomplete thinking or that you understand the problem you are trying to solve.


Also covered in the article.


Like I say, I don’t disagree with anything in the article. In fact I’d go further and say that the metaphor is not just misleading, but actually damaging.


I wonder how architects would go about their jobs differently if they could build a new house in a few seconds, run a few thousand unit tests on it and then tear it down again at almost zero cost.


They can in modelling tools.


Do they actually run unit tests over those? I've seen the VR modelers where you can "walk" through the building, but those didn't really verify whether there was enough room in the walls for piping, how light falls, floor strength limits, fire requirements, etc.

I know at least one department in the Dutch navy was using constraint solvers to design a new submarine class, but I haven't heard a lot about house design. Not a home architect though, so maybe I just haven't kept up with the state of the art there.


> do they run unit tests on those

My initial glib answer is “no, the engineers are stuck to deal with the bad design afterwards” :)

My less glib answer would be that most disciplines of engineering do do modelling and “virtual testing”. I’ve been working on largish drones for about 18 months. Electronics get specced out, circuits get drawn and simulated, currents are estimated and components are specced based on those estimates, etc. On the mechanical side, parts are drawn in CAD, analyzed for proper fit and assemble-ability, FEA is used to determine whether a part is likely to be strong enough for the static and dynamic loads that will be placed on the aircraft, etc. All of this happens, generally, before circuit boards or modules get ordered or before the first chunk of aluminum goes into the CNC.

As far as building software goes, I’ve recently had to start digging into DO-178C, and I’m pretty impressed so far. Despite having been through engineering school (EE), I haven’t ever really had to put this level of rigour into practice for software before.

It feels slow and somewhat time consuming, but I’m feeling more and more confident that the module I’m working on is going to be pretty damned robust. It’s not a “certification required” unit, but it would be a huge pain in the ass if it failed and couldn’t recover without human intervention; as a result I’m doing more of a “DO-178C lite” with detailed low-level requirements stated explicitly, system architecture drawn out first, sequence diagrams for the complicated event-driven parts, state charts for the different parallel systems, etc.



And that's the thing, the code _is_ the blueprint. What is code if not a sufficiently detailed specification?


That has, until recently, been my philosophy for a long time. See my sibling comment for context.

Another example of where modelling both saved time and helped make a system more reliable: about 6 months ago, I took on a project that involved integrating a Bluetooth-Iridium gateway into an existing app, so that the app could work in areas with no wifi or cell connectivity. There’s a lot of things going on here: the Iridium modem has a limited queue size, transmissions are not instant, connectivity to the constellation fades in and out, Bluetooth randomly disconnects, etc.

My gut reaction is to slowly build it out. Start with the low-level BLE stuff and build layers on top. Which I did do to shake out the functionality. But before I went and built the whole thing, I stopped and tried to build a TLA+ model of the system. That modelling was only somewhat successful; based on the constraints applied by each part, I discovered that it would be impossible to make the entire system reliable. What the model did, though, was highlight specific failure cases that would need to be detected and reported to the user.

I agree that, in the end, the source code ends up being a detailed and precise specification of what the system does. What I’m realizing more and more though is that putting together more abstract models of the system ahead of time can save a ton of time by forcing you to think through cases that would (in my experience) end up with dirty hacks when they’re discovered after 90% of the code has been written.

It’s not the approach required for every project. I still love firing up a Python or Lisp REPL and hacking away. It’s way more fun! But for certain types of software, it’s very much not the appropriate way to work.


> That modelling was only somewhat successful; based on the constraints applied by each part, I discovered that it would be impossible to make the entire system reliable.

"I discovered I can't satisfy all my requirements at once and need to plan my tradeoffs in advance" sounds like a pretty successful modeling result to me!


You're right :). I sometimes look at that modelling exercise with disappointment; not because I didn't get any value out of it (I got a ton), but because I'm not yet good enough at TLA+ to have been able to successfully write a model that handled all the edge cases.


I like this reply, but my instinct is to ask: would you not have found out the same thing by doing the implementation? This is the first time I've heard about TLA+, and I don't know how difficult it is to work with. Was it easier than doing the thing? It does sound interesting though. But I assume that there is quite a curve to learning something like it.

And, my nitpicky self also says "but that's modelling! Surely, architecture and modelling are not the same thing!". But again, my lack of insight might lead me astray.

I do agree, a little bit of thought beforehand is very useful. Usually, I do this by doing a sketch on paper beforehand, if the mental map is not clear enough, maybe do some simple flow chart that describes the core functionality, just to make sure I understand it well enough.


I wrote this case study on using TLA+ for a business project: https://medium.com/espark-engineering-blog/formal-methods-in.... This was a system we'd already spent a lot of time working on and consistently saw a direct correlation between making changes and new production fires. TLA+ caught issues with the planned changes before we actually implemented them.


And I would like to thank you for the material you've written. You, personally, played a huge role in my choice to look into and eventually standardize on TLA+ as my "distributed systems modelling" tool as well as getting me going on it. You rock, thank you!


> "would you not have found out the same thing by doing the implementation?"

Yes, although there's a couple of important points to that:

- A lot of it wouldn't have been discovered until late into the implementation. The project itself looked straightforward, and my estimates would have been brutally off if I hadn't done the modelling exercise ahead of time.

- Some of it would likely not have been discovered until after the app shipped. Rare edge cases. To give you context, it's an Android app that talks to an Iridium (satellite) modem over BLE. I basically modelled an abstract Android Activity lifecycle with extra events for user actions, a BLE stack (both sides of the connection), the Iridium connection, and the server on the other side of it. Each of these state machines was relatively straightforward to model. The insight came from letting TLA+ find paths through the five parallel state machines that got me into a weird state. (For an easy example: we make a request that successfully makes it out the Iridium modem to the server, the server replies, the response is delivered to the modem, and then a corrupt BLE packet terminates the connection.)

- You're right, architecture and modelling are different things. My flow is basically: 1) high-level requirements, 2) model of the system based on the HLRs, 3) low-level requirements derived from the HLRs and the insight gained from modelling, 4) architecture derived from 1,2,3.

- The amount of thought I'm going to put into a project ahead of time depends drastically on the project. The Iridium/BLE project? On the safety spectrum of 1 (mobile game) to 10 (flight critical software for a commercial jet), that project was about a 6-7. The software didn't have the ability to kill someone, but if it failed in the wrong way, they might believe someone was coming to rescue them when no one was coming.
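To make the "find paths through parallel state machines" idea concrete without any TLA+, here's a toy sketch in plain Python. The machines and transition names are hypothetical simplifications I made up for illustration, nothing like the real app or modem protocol: a BFS over the product state space surfaces the "reply stuck in the modem after a corrupt packet" state described above.

```python
from collections import deque

# Hypothetical toy machines: {state: [(action, next_state), ...]}
APP = {
    "idle":    [("send", "waiting")],
    "waiting": [("got_reply", "done")],
    "done":    [],
}
LINK = {
    "up":   [("corrupt_packet", "down"), ("got_reply", "up")],
    "down": [("reconnect", "up")],
}
MODEM = {
    "empty":   [("send", "queued"), ("corrupt_packet", "empty")],
    "queued":  [("tx_ok", "replied"), ("corrupt_packet", "queued")],
    "replied": [("got_reply", "empty"), ("corrupt_packet", "stuck")],
    "stuck":   [("corrupt_packet", "stuck")],
}

MACHINES = [APP, LINK, MODEM]
ALPHABETS = [{a for ts in m.values() for a, _ in ts} for m in MACHINES]
ACTIONS = sorted(set().union(*ALPHABETS))

def successors(state):
    # An action fires iff every machine whose alphabet contains it
    # can currently take it; machines that don't know it are unaffected.
    for action in ACTIONS:
        new, enabled = list(state), True
        for i, m in enumerate(MACHINES):
            if action in ALPHABETS[i]:
                moves = [nxt for a, nxt in m[state[i]] if a == action]
                if not moves:
                    enabled = False
                    break
                new[i] = moves[0]
        if enabled:
            yield tuple(new)

def reachable(init):
    # Plain BFS over the product state space.
    seen, frontier = {init}, deque([init])
    while frontier:
        for t in successors(frontier.popleft()):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

states = reachable(("idle", "up", "empty"))
# The "weird state": the app is still waiting but the reply is stuck
# in the modem because a corrupt packet killed the connection.
bad = sorted(s for s in states if s[0] == "waiting" and s[2] == "stuck")
print(bad)  # → [('waiting', 'down', 'stuck'), ('waiting', 'up', 'stuck')]
```

A real TLA+ model does much more (fairness, temporal properties, symmetry), but even this brute-force version shows the payoff: the bad interleaving falls out of exhaustive exploration rather than having to be imagined up front.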


Which blueprint? Engineers I talked to often mentioned that they'd make several different blueprints and different layers of detail. Is code the "high level" or the "low level" blueprints?


I think a blueprint is more like the source code, and a building is more like an instance of the software running in memory.


Even that comparison is bad, because a large part of the problems you might have with a house are not because of a bad blueprint, but because it wasn't built properly.


Most construction errors are ultimately an issue of bad design. The more complex the roof line, the more likely you are to have leaks. Using multiple types of cladding similarly invites a host of issues, etc, etc.

There is still plenty of room to mess up the details, but that tends to be easier to ignore and cheaper to fix.


I agree that analogies and metaphors are always inaccurate and always misleading. And often pointless.

But:

> You don’t generally consider how a house will evolve over time. A house has a limited number of possible states

I don't believe either of these is true

> and a trivially low number of entrances and exits.

and I believe good software should strive to have a similarly low number

Obviously I'm nitpicking, but I don't believe buildings are necessarily worse than other terrible software analogies.


My point is really that we don’t use analogies to design houses and I think we should stop using analogies to design software.

To your first point, I’ve built a couple of houses, and I’ve never had to worry about how they would evolve or be modified by others after they were built.


As someone who's known a fair few architects, we unfortunately do use analogies to design houses. Sadly.

Glad to hear you don't when designing yours though.

> I’ve never had to worry about how they would evolve or be modified by others after they were built.

That seems unfortunate. Isn't that something you should want to consider? (Though you describe it as undesirable, which seems a shortsighted view of the purpose of buildings.)


How would you remove the third floor of a 5-story building? I'm sure it's possible.


It's probably possible, but I'd say it might be slightly easier to remove the 5th floor. After which the 3rd & 4th could be.. you know.. evolved.

I've also more often seen space in buildings moved horizontally rather than vertically.


Yes, building analogies are terrible for coding. A much better analogy is writing.

Writing code is a lot like writing. You do it by typing documents in an editor. You revise them. You might have a written outline first, just an outline in your head, or none at all. Revision is important. Before you publish, it is very helpful to have a few smart people you trust read your work over and tell you what's wrong with it, etc.


I've interviewed a bunch of people who used to professionally design buildings and now professionally write software. They have a lot more in common than you think.


That's interesting. Can you elaborate on where the similarities are? Thanks


Your phrasing suggests a comparison between 1. a finished building and 2. the process of making a piece of software.. of course there is no comparison.

If you look at a more apples-to-apples comparison, i.e. designing a bespoke building (and in that I would include designing the process of building it): designing a building is about structuring a building process, a business process, a living process, a manufacturing process, or a public interface, a place of work, or life, or learning, or wonder, all within or about the confines of a building.

Even simple bespoke buildings bring together hundreds of thousands or even millions of parts which have never been put together in this way or by these particular people before. You have to consider how all these materials fit together, behave together, how they degrade over time, their warranties, their performance, how they will get removed, replaced, disposed of. You have to consider how buildings will be modified, or inhabited or changed over time (because if you can't change it, it will be knocked down).

You have to account for the wildly differing preferences for taste, or comfort, or usage of buildings which your clients cannot describe and often have no awareness of. Your client is sometimes the end user, but most end users are not clients. You have no control over any of the important decision making processes, but lots of control over the unimportant ones. All of your attempts to standardise the process in order to contain timeframes result in an incremental lessening of quality or usefulness - sometimes to the effect that the finished product horribly underserves the end users (you often make the clients happy though). Clients also sometimes just refuse to pay.

Architects have been thinking about and writing about and experimenting with information gathering, processing and dissemination for (at least) hundreds of years. I mean, architects were knowledge workers before we even had paper to draw the blueprints on. Much of the early systems of standardisation and automation were driven by architecture (and fashion, surely). Hell, the first 3D scanning of an object, a statue, was in the 15th century, by an architect.[0]

We have been using ticket systems (RFIs), bug reports (post-occupancy evaluation) and version control systems (transmittals and revisions) for longer than the software industry has been around. The reason for all the arcane processes in the industry is to try to capture, use and re-use information, and to standardise processes across consultants, builders, clients, and manufacturers.

Most architects are shit. We developed horrible hiring procedures to weed out the shit ones, that don't really work. They chronically underpay staff. They have missed just as many deadlines, and spent just as much money in cost blowouts, as the software industry.

No. Designing buildings is almost exactly like designing software. It is just data processing and prediction after all.. and bullshit. Everything new is old. You can probably learn from their mistakes.

[0] https://www.smithsonianmag.com/arts-culture/digital-files-an...


I have huge respect for architects and civil engineers - and most other professions - and my comments are not meant to suggest that there’s nothing to learn from them. But software is not the same as buildings and we shouldn’t get too invested in the analogy.

Ultimately, most buildings get to a point where they are finished. Tenants move in and the building will stay structurally identical for at least the next 10 years. But a huge amount of software never gets to that point. There's always a "next release". Some releases change the structure of the software to fit in concepts that weren't even considered when the original was built.

For some time now I’ve been using the phrase “software is more like a public garden than a building”. Gardens grow and evolve over time; gardens can quickly and easily be modified for different uses; there are complex dynamic interactions between the parts; users have broad latitude to do unexpected things with them; they even have bugs! (some of which occur long after it’s finished). To me, designing the first iteration of a software system is only one part of the problem. The real challenge is maintaining the “architecture” in the face of long term change. I’ve been wondering if software engineering is more like permaculture than architecture.

The problems in software come from the places where it’s different from anything else. So maybe I’m wrong to say that the building analogy is flawed. Perhaps it’s more accurate to say that it’s not a deep enough analogy to make a meaningful impact on software engineering.


Sorry, never saw your response. Agree, and I often argue that architects should pay more attention to the fast-paced and generative lessons of software languages, processes and businesses. We should see architecture as less finished, because it rarely is. It always changes: it degrades, the business changes, the family changes, the city changes around it, it breaks, it gets extended or refit or rebuilt, and architects are only sometimes involved (rarely the same architects).

But just because architecture is slower (much slower), it doesn't mean it doesn't contain many of the same issues and challenges. But architecture as a process, the challenges of clients, user interface, communication in building or use, business interface, cost and time control, and not least the incremental effect of degradation and specialisation disintegrating the profession etc.. these have been well documented in our profession and are being experienced anew in software.

I love the garden analogy, architecture should also be more like gardens! We have this misunderstanding that architecture is permanent, but really it is just slow. Trees are nice, but we also need perennials and vegetables.. and mulch.


When building a house, there is a considerable difference in effort between making detailed construction drawing and building the house, so it makes sense to make a detailed construction drawing.

In software, there is effectively no difference between making a detailed design and actually writing the program, so it becomes a lot harder to know where to stop drawing and start building.

I guess that many of us have, at some point, thought we had come up with a good design, after spending lots of time on it, only to find that it stumbled on some detail when we actually tried to implement it. This makes it tempting to think "If I must work this out in detail anyway, I might as well do it in code. That way, if I get it right the first time around, I will be done!" Unfortunately, it also makes it tempting to stick with a design after it has proven to be a mistake, because of the sunk cost.

(edit: grammar.)


It does feel that a lot of buildings were designed with very little effort and are just random arrangements of components with some patchwork to make them passable. There are so many weird solutions, especially in newer buildings.

Far too few entrances and exits. Doors opening the wrong way, or extra doors or short stairs you have to pass through. Random-looking window sizes and placements. Or no windows. Or no windows on the side with the best views, instead all facing a nearby building. Having to walk around a lot for basic, oft-repeated functions - i.e. stairs or elevators that feel like they are installed backwards.

I did some hobbyist 3D-shooter level design in the past, and played a lot of custom maps. You could see the high-level "flow" and vision of master designers when playing their maps or looking at them in the editor. Less experienced mappers might have some good individual ideas, but the whole would usually not be so coherent.


I liked the article and tried to look into TLA+, but the documentation is off-putting at every turn. There is no HTML documentation; you need to download a ZIP of a book. Opening the ZIP, it's not just a single PDF, it's a bunch of PDFs (many a page or a paragraph long) that the table of contents hyperlinks to. This makes it impossible to skim the documentation.

Looking at some examples, it looks like TLA+ uses non-ASCII symbols which are then maybe mapped to ASCII? As such, you need to remember lots of obscure sigils, and I couldn't find a high-level intro of how it works.
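(From the examples, the mapping seems to be the other way around: you write plain ASCII, and the tools pretty-print it with the mathematical symbols. A hypothetical snippet, the identifiers are made up and only the operators matter here:

```tla
\* ASCII source on the left is rendered as the symbol on the right:
\*   /\ = and,  \/ = or,  ~ = not,  => = implies,
\*   \A = forall,  \E = exists,  \in = set membership,
\*   [] = always,  <> = eventually,  == = "is defined as"
Inv == pending => link = "up"
Liveness == [](requested => <>(answered \/ failed))
```

Once you know those few mappings, the specs read a lot like ordinary logic notation.)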

It sounds very interesting, but the documentation could be made much more helpful. Lamport comments that "people nowadays don't read, so I made a video course", but I'm not surprised people don't read if reading is this hard. (I personally dislike video courses because you can't skim them and would much prefer HTML documentation).


Agreed about the TLA docs, but it is worth persisting, as TLA is very useful in some situations. These slides cover the basic syntax enough to get started.

https://www.slideshare.net/ScottWlaschin/tla-for-programmers...


Thank you, that's very useful.


I wrote an online guide designed to be a bit more accessible: https://www.learntla.com

(I stopped working on it to write a full book, but now that that's done I really need to get back and overhaul it. Still a useful resource IMO)


This is really well done, thank you. It's so well done that I'm reading just because it's so interesting, I was afraid my eyes would glaze over but I'm really engrossed by how you're qualifying the problem first.


> Architects draw detailed plans before a brick is laid or a nail is hammered. But few programmers write even a rough sketch of what their programs will do before they start coding.

Is the second sentence here true in any meaningful way?

Usually when I start on a personal project that is bigger than a one-liner or a script, and more novel than a web page, I've been thinking about it for weeks and have made a few sketches of the architecture and key data models, and even then I start with a rough outline of the code, ready to throw it away.

I don't think everyone does it like me, but it would surprise me if most professional programmers don't make some kind of sketch before starting on a project..?

Either I'm really unusually professional (I don't think so) or this article has a weak starting point.


The building analogy doesn't work because the situation is completely reversed. Building commercial software is much cheaper than designing it. Constructing a building is much more expensive than designing it.

Specifications are expensive, and since designing software is risky, a specification forces you to take the entire risk upfront. Lots of projects stop after a failed MVP to cut losses. You can't do that with a well-thought-out specification.

There are obviously well studied areas of computer science where the opposite is true. Compilers, databases and simulations benefit from a good design but this is precisely because of their well studied nature. The only way to get an edge in the commercial world is by doing something nobody else did before. That often means nobody, including the creator of the software, knows what the best design is.

This commonly results in an "idiotic market leader" effect where a product with obvious flaws (say mongodb) somehow manages to dominate a market.


My TLDR:

> 1h of up-front thinking about the problem and your solution can save you days of refactoring, testing, and debugging. The author proposes writing specifications as a good way of doing up-front thinking.

Author's own conclusion:

> There is nothing magical about specification. It will not eliminate all errors. It cannot catch coding errors; you will still have to test and debug to find them. (Language design and debugging tools have made great progress in catching coding errors, but they are not good for catching design errors.) And even a formal specification that has been proved to satisfy its required properties could be wrong if the requirements are incorrect. Thinking does not guarantee that you will not make mistakes. But not thinking guarantees that you will.


We can shorten it further. Dwight D. Eisenhower once said, "In preparing for battle I have always found that plans are useless but planning is indispensable."


Pro-tip: Do this with others. I have some colleagues who, without fail, are better than me at structuring my own thoughts, and before writing a large amount of code I almost always explain my goals to them.



Just like real estate agents, who often oversell and charge more just because of some new paint. Same with software: dozens of times I've planned out a solution just to find that a component doesn't support a case or has a bug. Thankfully we can just "fail fast" in software, so we should use that.



