Hacker News
Sequence diagrams, the only good thing UML brought to software development (mermaidchart.com)
623 points by knsv on June 15, 2023 | hide | past | favorite | 356 comments



I also find sequence diagrams to be the most useful, but disagree that the rest of UML is useless. Class, component, package, activity and state machine diagrams are all useful ways to model the structure and behavior of a system visually.

The only reason the other diagram types fell out of favor is because of the development methodology change starting in the early 2000s. The industry started rejecting Waterfall, early design and system architects, in favor of Agile, just-in-time design and empowering developers. So we saw no need for these visual design tools to model the entire system, since we ended up changing the design during the lifetime of the project anyway. The drawback of this, of course, is that with the Agile approach these diagrams never end up being made, so developers are left to assemble their own mental model of the system, which hurts the overall comprehension. Most developers IME actively reject these diagrams because they are quickly outdated, or require constant changes to keep up to date, which is true, but this is not unlike documentation, comments, and a myriad of other things that need to be kept in sync with the code.

Yet sequence diagrams are useful in a wide variety of use cases, and let's face it, they're the easiest ones to comprehend, understandable even by a non-technical audience. The other UML diagram types, in contrast, have stranger notations and pack their information more densely.


> Class, component, package, activity and state machine diagrams are all useful ways to model the structure and behavior of a system visually.

I completely agree with you.

It's a good way for other people to present information, for me to look at.

I just won't do it myself.

It's not only me; and that's why it's dead.

I won't do it because the first thing that comes to mind is how it will go out of date in a month, and have to be maintained to stay accurate.

Maybe in the not-too-distant future we will have AI grokking large code bases and cranking out accurate, useful, UML diagrams out of it.

All those diagrams, when they are complete, correct and up-to-date, do convey what they are supposed to convey.


I've taken a throwaway approach to diagramming, where I'll produce them more or less on demand for a meeting or presentation, but not think of them as an enduring artifact. PlantUML is my friend here because I can knock out ugly but gets-the-point-across diagrams in 30 minutes before a meeting and check them into source control, so I can then take the bones of older diagrams and rework them for a fresh meeting.
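
For anyone who hasn't tried it, a meeting-ready PlantUML sequence diagram really is just a handful of lines. A sketch (the participants and calls here are invented for illustration):

```plantuml
@startuml
actor User
participant "Web App" as Web
participant "Auth Service" as Auth
database "User DB" as DB

User -> Web : POST /login
Web -> Auth : validate(credentials)
Auth -> DB : SELECT user
DB --> Auth : user row
Auth --> Web : session token
Web --> User : 302 redirect
@enduml
```

Ugly by default, but it renders in seconds and diffs cleanly in source control.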

I used to whiteboard for this, but that hasn't carried over well in the remote world. What I miss about whiteboarding though is that you can tell a story as you draw, so whoever's viewing can watch something unfold from a blank slate while I'm walking them through the history of whatever system we're describing. That said, I can make a PlantUML diagram much more correct than a whiteboard.

All that said, I too would love for more of this to be increasingly automated through AI. And I suppose it makes sense conceptually, because for me the value of a crafted diagram over an automated one is that no one really wants to look at the insane ERD of an OLTP database or a production object model. They want the digestible high level vision of the important bits, or the bits that are relevant to the conversation taking place. So it's a summarization problem. How to get the right data to produce a correct summary is interesting--I'm sure if I look there are a dozen papers to read on a similar subject :).


That's a great approach that has worked for me too. I think PlantUML is a great little system. Not only can you get your point across in a handful of lines of text, but I have found that it is quite easy to generate the PlantUML programmatically. So instead of having to keep docs and code in sync you can end up generating your graphs in CI. The upside compared to big diagramming products is that PlantUML requires no boilerplate, and you can probably grep and sed most diagrams out of your codebase, which is a quite low barrier to entry.
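
As a sketch of what "grep and sed most diagrams out of your codebase" can look like, here are a few lines of Python (the regex only handles simple `class Foo(Base):` declarations; real code would want a proper parser):

```python
import re

# Matches simple Python class declarations like "class Foo:" or "class Foo(Base):".
CLASS_RE = re.compile(r"^class\s+(\w+)(?:\((\w+)\))?\s*:", re.MULTILINE)

def source_to_plantuml(source: str) -> str:
    """Emit a minimal PlantUML class diagram from Python source text."""
    lines = ["@startuml"]
    for name, base in CLASS_RE.findall(source):
        lines.append(f"class {name}")
        if base:
            # PlantUML inheritance arrow: base <|-- derived
            lines.append(f"{base} <|-- {name}")
    lines.append("@enduml")
    return "\n".join(lines)

code = """
class Animal:
    pass

class Dog(Animal):
    pass
"""
print(source_to_plantuml(code))
```

A CI job could run something like this over the codebase and commit or publish the rendered images.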


For whiteboarding remotely I use https://excalidraw.com. I use it exactly for your use case. I’ve told live stories with it.


Excalidraw is great, especially the paid product. One of the cases of a very good open-core business plan. I hope they do well.


Thank you, I'm going to give that a try!


> Maybe in the not-too-distant future we will have AI grokking large code bases and cranking out accurate, useful, UML diagrams out of it.

> All those diagrams, when they are complete, correct and up-to-date, do convey what they are supposed to convey.

I spent a bit of time prototyping this recently. It's definitely possible. Rational Rose also had the capability to generate diagrams from code, though I don't remember how good it was at the task. Was Rational Rose just a very bad implementation?

I find the hate in this chat strange because diagrams are incredibly useful when working through complex problems* and then conveying that information to other engineers. My experience over 20+ years is that a huge portion of engineers can't grok complex problems from code alone.

*Most of my projects are optimizing billion+ row databases, micro-service architectures, and various other scaling challenges.


The problem isn't generating UML from code, the problem is generating useful UML from code.

Where those boxes are in relation to each other matters. You can't just randomly throw boxes and lines on the page; you need to arrange them so that we can tell which things are related by how close they are to each other. Automatic UML doesn't capture that.

Sometimes you have complexity in code that needs to be hidden by default. I don't care about rare error cases most of the time, but automatic UML can't know which complexity is only needed for rare cases and which complexity you need to show the junior on their first day. Related to this: when I'm interested in one error path, how do you hide the others?

Then the real killer of both: next week there is a minor requirement change (new features we always knew were coming and even planned for), and now the code has changed. UML doesn't follow that. Either you generate UML and have the above problems, or you manually update the UML in theory, but in practice just let it slide, because the week after you know something else will change and you don't need it today anyway.


> Or you manually update the UML in theory, but in practice just let it slide, because the week after you know something else will change and you don't need it today anyway.

I've heard this argument many times, but how is it different from keeping documentation, comments, tests, or the issue tracker up to date?

They all require some discipline, but if the team finds value in any of these things, they would make an effort to keep them synchronized with the code.

Besides, I suggest not falling into the trap of having formal design documents early in the project's lifetime. Rather, start with informal diagrams and sketches, and once the design has mostly settled, switch to something like UML. That way it wouldn't require changes every other week.

As for automatically generating, and dynamically arranging diagrams, this is more up to the generating tool than UML. These tools are still stuck using decades old technology at this point AFAIA, but there's no reason that a smarter tool couldn't do the things you mention.


> The problem isn't generating UML from code, the problem is generating useful UML from code.

I would kill for a tool that could output class and sequence diagrams from a project or component. If the format is editable and tools can automatically layout/filter out components, all the better. Right now the main time sink in putting together diagrams is trying to express what's in the code. Once that's in place, we can prune stuff we don't need.
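
A rough version of the class-diagram half of such a tool fits in a few lines with Python's `inspect` module: walk a set of classes and emit PlantUML (this sketch only captures names, bases, and public methods; the example classes are invented, and layout/filtering are left to the renderer):

```python
import inspect

def classes_to_plantuml(classes) -> str:
    """Render classes as a PlantUML class diagram: one box per class,
    with public methods and inheritance arrows."""
    lines = ["@startuml"]
    for cls in classes:
        lines.append(f"class {cls.__name__} {{")
        for name, _ in inspect.getmembers(cls, predicate=inspect.isfunction):
            if not name.startswith("_"):  # skip dunders and private helpers
                lines.append(f"  +{name}()")
        lines.append("}")
        for base in cls.__bases__:
            if base is not object:
                lines.append(f"{base.__name__} <|-- {cls.__name__}")
    lines.append("@enduml")
    return "\n".join(lines)

class Reader:
    def read(self): ...

class CsvReader(Reader):
    def read(self): ...
    def parse_row(self): ...

print(classes_to_plantuml([Reader, CsvReader]))
```

Since the output is editable text, pruning the bits you don't need is just deleting lines before rendering.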


True, though nowadays I use PlantUML (https://plantuml.com/), a DSL with which you can create all kinds of UML diagrams. If any changes are to be made, I just make the incremental changes in the DSL and regenerate the images. It has been very helpful.
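
For instance, a state machine diagram in the DSL is small enough that a design change shows up as a visible two-line diff (the states here are invented for illustration):

```plantuml
@startuml
[*] --> Idle
Idle --> Running : start
Running --> Idle : stop
' a later requirement adds a paused state: just two more lines
Running --> Paused : pause
Paused --> Running : resume
@enduml
```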


Surprisingly, for average crowds, who usually describe stuff in fluffy text, UML-like diagrams suddenly feel like going from VB5 to Python 3.


I'll draw those diagrams... on a whiteboard, sans boxes (Tufte). Great for point-in-time communication, less useful as specifications.


People who say "UML is useless" basically say "diagrams are useless". Which obviously isn't true. The alternative to UML is everyone inventing their own "diagram language" when they want to visualize something.

Moreover, I think comparing different UML diagrams can also be enlightening for university students. E.g. state machine diagrams look quite similar to activity diagrams, but the former emphasize states and the latter actions. And class diagrams let you learn the connection between the class structures of OOP languages and database structures.

UML lets you visually learn the abstract higher level concepts without having to rely on irrelevant specifics of practical implementations, or just on dry theoretical text.


> everyone inventing their own "diagram language"

This is clearly better than UML. UML is full of shorthands that nobody remembers. That's worse than people making labelled custom diagrams.

Here's an example:

https://buck2.build/docs/concepts/concept_map/

Imagine how much worse that would be with UML arrows.


That concept map example is only a loose association of ideas, where the nodes aren't of the same type (e.g. event, state, class etc), or sorted into types, and apparently included quite arbitrarily. Mind maps are similarly loose. UML is for when you want more precise diagrams about a fixed subject matter.

I agree on the "shorthands", like empty/filled arrows, that people may not know. But there isn't much alternative to such shorthands other than leaving them out completely, which wouldn't be an advantage. In diagrams some information is either conveyed succinctly or not at all. The "alternative" is a block of text instead of a diagram.


That's heavy backpedaling from your earlier

> People who say "UML is useless" basically say "diagrams are useless".


Well, if you only allow concept maps and mind maps, then I guess, yes, you don't actually say (all) diagrams are useless. But you are pretty close.


Not exactly related, but this concept map looks amazingly similar to a semantic ontology. RDF/OWL may not be the easiest encoding, but is quite capable in organizing very large concept schemes.


UML as a projection is fine and a valuable tool.

Starting with UML to describe a set of classes or - worse yet - an entire system is lunacy and a massive red flag.


What's wrong with describing some classes first with a UML diagram?


Nothing wrong, but code is already formal enough that you don't need a formal visual language (UML is not just a bunch of diagrams, it's a formal language). So yeah, a bit of drawing might help to understand, a few ideas from UML might help, but it's not like it is super necessary.

As said, the sequence diagram is a real plus because sequence information is not very well expressed by code, so that diagram has much added value.

I personally find UML hard to use because if you want to communicate your ideas precisely, you have to know much of its formalism, and most often the people who read your drawings don't master its intricacies, so the quality of communication suffers...


> code is already formal enough that you don't need a formal visual language

Code can be formal enough, but in reality it is almost never formal enough. Many constraints can be expressed imperatively (e.g. someone validates the number of related objects in the create view) and in unexpected places (across repositories and libraries, and maybe not even in code but in some stored procedure). And then it is all in flux and can go away the moment you change your stack.

I don't know if UML is perfectly flexible (maybe?) but its sure benefit is a source of truth that does not depend on your implementation.


The fact that it doesn't provide any benefit, nor make things particularly simpler, compared to describing them in code and projecting that into UML if you need a visual diagram. Also, the fact that it is particularly counterproductive if I'm not coding in a pure class-based OOP language, since it presupposes that the solution is centered on class-based OOP.

There are diagrams that are useful before coding, but most of them aren't part of UML, which is a product of the simultaneous high-water mark of class-based-OOP-is-everything and pre-Agile industrial software development methodologies, and (except for some isolated bits) is poorly fit to anything else.


> Starting with UML to describe a set of classes or - worse yet - an entire system is lunacy and a massive red flag.

I don't think this is right or a well-founded opinion. The whole point of any diagram language, including UML, is to provide the means to describe software projects in different views, including class diagrams but also component and system diagrams.

What do you think is the whole point of a modeling language?


> What do you think is the whole point of a modeling language?

Not building an entire system down to class level, which is what it is all too often used for (in my experience, exclusively by people who have never coded and cannot code).

It's inefficient, doesn't play well with software engineering practice (diff, version control, etc.), and is a relic of the past.

Keep the classes in code, project them into a diagram on the few occasions that's actually needed, and stick to a simplified tool like C4 (https://c4model.com) for your architectural views. Sprinkle in some sequence diagrams too, to help describe the key data flows (they're handy next to C4).
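
If the team is already on PlantUML, C4 is also reachable through its standard library. A context-diagram sketch (the names are invented, and the include form assumes a recent PlantUML build that bundles the C4-PlantUML stdlib):

```plantuml
@startuml
!include <C4/C4_Context>

Person(user, "Customer")
System(shop, "Web Shop", "Takes orders")
System_Ext(payments, "Payment Provider")

Rel(user, shop, "Places orders", "HTTPS")
Rel(shop, payments, "Charges cards")
@enduml
```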

Why use C4 over UML? Because the vast majority of the audience don't even know what UML is. So keep it extremely simple and include a key / guide to any of the (few) shapes or colors you use so that it's completely self contained.

The vast majority of UML belongs in the Indiana Jones storage facility, along with other archaic ideas such as Function Points / Function Point Analysis.


> Not building an entire system down to class level (...)

That was never the point of modeling languages.

Your baseless assumption is even more absurd when you factor in the fact that modeling languages even express runtime behavior, components, and packaging.

The main problem with these mindless attacks on modeling languages in general and UML in particular is that the bulk of their detractors clearly know nothing about them, nor have any direct contact with them or how they are used in the real world. Their arguments are nothing more than poorly-thought-through strawmen. Your comment fits well.

> Why use C4 over UML?

See, this is a telltale sign you know nothing about what you're commenting on. Not only is C4 a modelling language like UML, but it also covers the exact same responsibilities and features as UML. C4's main selling point is advocating for the representation of multiple points of view, and UML is a standard specification to express all the points of view you see fit in a coherent and rational way. There is no C4 vs UML. They have the exact same design goals and use cases.


I think you are making the mistake of assuming that because something is an important motivation for a class of things, it must also be a good use for every instance of that class.

Sometimes motivations are misguided, and sometimes instances aren’t well fit for major motivations of their broader class.


> (...) it must also be a good use for every instance of that class.

You, and OP, failed to present a single argument supporting the thesis that modeling languages in general, and UML in particular, are "lunacy and a massive red flag."

> Sometimes motivations are misguided (...)

Not only did you fail to support anything in the original anti-diagram rant, you also tried to support generalizations with hypothetical corner cases involving misuses, which is an absurd argument to make.

So not only do you have zero substance for show with regards to the original anti-UML rant, you also resort to using vague, unsubstantiated strawmen.

This is to be expected. Like many discussions involving technical aspects, generally detractors come from positions of opinionated ignorance.


> You, and OP, failed to present a single argument supporting the thesis that modeling languages in general, and UML in particular, are “lunacy and a massive red flag.”

I failed to support that thesis because I never endorsed it. Pointing out a flaw in a counterargument is not endorsing the argument it is deployed against.


The nice thing about visual stuff is that it is self-explanatory. Instead of needing to read a book about UML first, maybe just use some prose in addition to your diagrams, and then you don't need UML at all. That's also more flexible.


> it is self-explanatory

It most certainly is not.


The superior alternative is drawing a diagram on a piece of paper to exercise the mind and show to collaborators, and never setting foot anywhere close to the UML tarpit.


There are other types of diagrams beyond what UML formalizes. For example, Data Flow diagrams can be very helpful, but they don't exist in UML (Component, Composite Structure, and Activity diagrams all have similarities with Data Flow diagrams, but none are really a great fit).


> Most developers IME actively reject these diagrams because they are quickly outdated, or require constant changes to keep up to date

Plans are useless, but planning is indispensable. The act of diagramming things up front is useful to get you thinking about the problem space and come up with the outline of a solution. After that, keeping the initial design documents up-to-date is optional, and often might not be necessary.


I think about it the other way around, actually.

During the prototyping phase diagrams do help, but they're usually sketches written on paper or whiteboard. You don't want to waste time with tools and strict specifications to design perfect diagrams, mostly because the design will change frequently, and you don't want your tools getting in the way.

Later on, once the design has settled down and maybe once development has started, those initial diagrams are mostly worthless, but you _do_ want neat and professionally done design documents that describe the system. This allows you to share them with coworkers, and quickly onboard other developers to the project. Hopefully by then the design won't change frequently, which would make updating these a chore.


We also moved to framework-heavy development, where you write classes that plug into frameworks. So a POJO here and a POJO there… (everywhere a POJO), managed by a framework and existing only to extend it, is marginally useful to diagram. A UML class diagram for a Spring Batch transformer that's cobbled together from an existing CSV reader component just isn't that interesting or necessary.


> The only reason the other diagram types fell out of favor is because of the development methodology change starting in the early 2000s. The industry started rejecting Waterfall, early design and system architects, in favor of Agile, just-in-time design and empowering developers.

I dunno, whenever I heard people say they're doing Agile all I see is them doing Waterfall without documentation.


How is that worse than waterfall with outdated documentation, which is what we had before?


"we are lazy assholes who can't keep docs up to date, therefore docs are useless"

is essentially the argument I see over and over again. And every time I start a new project and that project has docs or UML or whatever, I appreciate it. Even if it is a bit out of date, it's way better than nothing.

In my opinion, every project should have a readme file with instructions for running the project locally, and ideally also a high level visual representation of the system.

It should be separated into subsystems/modules for separate functionality, and each module could have its own set of readme and high-level visuals. In addition, modules should have small coupling points, typically interfaces, and these should have documentation comments describing what they do, inputs, outputs, etc. APIs should have something like a Swagger doc.


> "we are lazy assholes who can't keep docs up to date, therefore docs are useless"

Rather: managers do not highly value work on documentation, so many programmers care little about it. If writing good documentation was valued more by managers than implementing feature stories, the situation would be different. Incentives do matter.


We are the engineers. It is up to us to tell non technical managers what's important. They don't know shit, they're just there to handle the shit we don't want to handle, like talking to clients or upper management or whatever.

It's all just bad excuses for not doing work properly. Writing clean and well documented code very quickly becomes faster than writing a tangled undocumented mess.


It's not a question of laziness. People just need to use better tools to generate docs and fail CI if the implementation and docs diverge.

I used to hear people say the same thing about laziness about code style before reformatting/linting tools became standard.
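
A minimal version of the CI gate mentioned above is just "regenerate and diff": rebuild the diagram source from the code and fail the build if it no longer matches what's committed. A sketch (the file name and the upstream generator are hypothetical):

```python
import tempfile
from pathlib import Path

def check_diagram_fresh(committed: Path, regenerated: str) -> bool:
    """True when the committed diagram text matches a freshly
    regenerated one; a CI job would exit non-zero otherwise."""
    return committed.read_text() == regenerated

# Demo with a throwaway file standing in for e.g. docs/arch.puml:
with tempfile.TemporaryDirectory() as d:
    committed = Path(d) / "arch.puml"
    committed.write_text("@startuml\nA --> B\n@enduml\n")
    assert check_diagram_fresh(committed, "@startuml\nA --> B\n@enduml\n")
    # a code change that alters the generated diagram should fail the build
    assert not check_diagram_fresh(committed, "@startuml\nA --> C\n@enduml\n")
```

Same shape as a formatter check: the tool, not discipline, keeps the artifact honest.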


> It's not a question of laziness, it's just that people don't do it so you need automated tools to do it for them

Sounds a lot like laziness to me. It may be more accurate to say it's because people don't care. I've seen projects full of typos. I've seen projects where basically everything had been duplicated not once but twice, because they wanted a slightly different version of the same website: instead of adding some configuration or otherwise finding a reasonable solution, they just went and copied hundreds of files, prefixed their names, made a couple of minor changes, and left the mess for someone else to discover. No mention anywhere of the fact that this had been done, nor why.

These are the actions of people who don't give a shit. Whatever problems they create are someone else's problems. The way I see it, if you care about your work you'll make sure the code is readable, well documented, and so on. If you don't, you won't.


Name me a tool that verifies UML and code are in sync in CI. Now let's add the requirement that it is good UML.

Don't forget that some details shouldn't be in UML.


> Class, component, package, activity and state machine diagrams are all useful ways to model the structure and behavior of a system visually

They're not bad, but the C4 model is a much better approach to high level modelling (while you can still use class diagrams on the lowest level). https://c4model.com/


Class diagrams feel pointless to me; this information works better as code. And how do you draw a class diagram with more than 10 classes and keep it readable?


If you use class diagrams to show the "truth" you are in painland in my experience.

However, I sometimes find them useful for extracting key parts of the system and showing their interaction: cutting away many attributes, many helper classes, many other things.

That can lead to pictures which are quick to grasp and then allow further digging based on code.


Maybe before you actually start writing code it's nice to create some domain diagrams which you can convert to class diagrams. Saves a ton of time, and you have good discussions about the general high-level workings of an architecture. Can be done on a whiteboard. Take some pics and start coding.


Assuming that you're using a modern IDE: it is an order of magnitude quicker to model the domain (classes + attributes) in code and project them as diagrams, say with Graphviz.

High-level boxes are great for thinking about systems, but the key there is abstraction - the minimal viable level of detail.


Don't you want to discuss and talk about a design before you even touch a keyboard? In my projects we always discuss high-level overviews with pen and whiteboards. Nobody should even dare to touch a keyboard.


Absolutely, I pretty much always start with design sessions on a whiteboard (or if remote a call using draw.io).

That discussion almost never goes down to the level of an individual class, and NEVER adheres to the UML standard.

Very occasionally I've had discussions about class structure by drawing class name + methods on a whiteboard but having the same discussion using Visual Studio's UML projection was just as effective.


This phrase caught my eye, "minimally viable level of detail", to which I immediately thought "minimal detail for a coherent thought". That's a cool concept, actually, that I might hold on to and develop.


If you're creating diagrams with Graphviz, you're just coding in another language. The benefits of diagrams appear when you draw them by hand (or graphically, by mouse).


I generate them: initially with my own generator (which is copyrighted by an employer from > 15 years ago); on a later project I found that Doxygen can do that for you out of the box :-)
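
For reference, the relevant Doxyfile switches look something like this (these are standard Doxygen options; they require Graphviz/dot installed, and exact behavior varies by Doxygen version):

```
HAVE_DOT            = YES
UML_LOOK            = YES
CLASS_GRAPH         = YES
COLLABORATION_GRAPH = YES
CALL_GRAPH          = YES
```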

For interactive discussions I just use Visual Studio's projection; for databases I use the SQL Server Management Studio ERD projector, or pgAdmin if I'm using Postgres.


The point is that you're generating diagrams from code that's already been written; you're not modelling, are you? ;-)

Those are a different kind of diagram than the one you create from scratch to decide what goes in the program.


I think class diagrams are a good way to visually display classes and hierarchy. If I have a package or module that has 10 classes and I want to show inheritance, properties, and methods, I think it's easier to show the class diagram than to hand someone 10+ source files.

Also, class diagrams are more useful when you can’t give out source.

That being said, I think you can autogenerate class diagrams from code so it’s not like you should spend a lot of time making them.


UML was (and still is) really useful when embarking on a greenfield design. Most people I've worked with know some basic UML notation, so when it comes to whiteboarding and refining the design, it makes sense to go with that.

In terms of artifacts for future maintainers? Maybe not as helpful. As previous posters have mentioned, the diagrams go stale very quickly if the engineering department isn't disciplined about keeping documentation up to date. But this is true for any documentation.

Some diagrams (class and activity, in particular) are less valuable as time passes. I can get the idea of the class structure by just looking at the code itself. The class diagram just ends up being a stale representation of the code.

It is unfortunate that UML has developed a reputation as being overly complex. Now I see more ad-hoc diagrams being created, with whatever notation/symbols make sense to author, rather than using a common diagram system that can be read by many.


> The industry started rejecting Waterfall, early design and system architects, in favor of Agile, just-in-time design and empowering developers. So we saw no need for these visual design tools to model the entire system, since we ended up changing the design during the lifetime of the project anyway.

It wasn't only visual design tools that Agile killed. Agile killed design documentation altogether.

With Agile, today's problems are addressed in tomorrow's sprint, and changes in software architecture take place only in the code. Any effort to document the system architecture in any remotely rigorous way quickly becomes outdated, and teams simply don't waste time maintaining something that might already be stale by the time they finish editing the document.


Does anyone else find the Agile way of doing things just ends up with lots of increments and no big picture of how the system should be or where it should be going? There also seems to be a misconception with waterfall that it's a one-way process and you can't do iterations.


Maybe UML can find a new life with ChatGPT. Maybe it can fulfill the promise of generating code out of diagrams. UML seems like a fine way to organize prompts.


Excellent comment and while maybe not UML, something like it is likely to emerge for exactly that purpose.


Sequence diagrams are the only bit of UML that also applies to distributed systems. And everything today is a distributed system.


To me sequence diagrams are... too sequential; my brain needs a protocol / graph-proof-like formalism to see things more globally.


Spot on, I agree that sequence diagrams are super useful, I see them used all the time in FAANG.

I do really wonder why UML is still taught in universities, as the article states, it's pretty useless. I took a masters Software Engineering course at Georgia Tech two years ago and a big part of the class was learning UML. That time was mostly wasted as I've never used any of it and never met anyone who has used it.

It wasn't my first time learning it either, we also had a section on it in my undergrad software engineering course. So I learned the same useless stuff twice.


Why do universities teach outdated, useless stuff? My guesses:

- For the university, it fills out offerings and takes up credit hours, keeps the tuition dollars flowing

- For the teacher, it's something they already know how to teach, so it doesn't require nearly as much effort to teach as something more useful but maybe less familiar

- Universities are trusted with the decisions of what to teach and don't face much short-term accountability, so there's no real downside to teaching a useless course for another year

- They probably don't know it's useless. (They also don't care to find out because of the aforementioned points)


My graduate program did not teach UML, but I did learn it during my undergrad program. It was a relatively small part of the major software engineering course. It introduced the idea of formally specifying software, and it forced me to reestablish, visualize, and otherwise integrate what I was simultaneously learning about things like interfaces and inheritance. It was presented as an educational tool and not at all that we would be using it in industry. Far from useless or out of date in an educational context.


There's a bureaucratic reason too. Changing curricula is not fast and can take years (depending on the institution of course).

My department profs actually got in a bit of trouble with the university because they took the course that they should use for undergrad/graduate mentoring (something like a "special studies in XXX" placeholder, usually for independent or small group study that was special enough for course credit) and used it to create their own courses outside the review of approving a new course with a distinct number and credit count.

The reason they did it in the first place was because if they wanted to change the curricula for the existing courses, remove course numbers, or create new ones, the bureaucracy would take 4-8 semesters to get approval and complete. By which time some of the material was obsolete. One of the profs got fired and the rest quit, eventually.


"Useless" is a strong word. Definitely a bit less useful than intended though.

Before UML, it's hard to capture the state of corporate software development that allowed the insanity to take hold. I mean, UML was the marriage of two different approaches to drawing object models that were locked in a battle: OMT and the Booch method. There weren't tons of open forums for discussion and debate like the internet has now; there were conferences and such, and these guys were basically trying to create formal methods for objects in a vacuum.

It was kind of existential stuff for a lot of the smaller players in the industry; everyone saw value in this newer approach to building software. "Reusable components" seemed huge. Tooling was expensive, training was expensive. Microsoft was moving at a scary rate; hitch your cart to the wrong horse and it could cost you the company... On some of the usenet forums, about the most open discussion there was at the time, I read debates about the virtue of C++-style multiple inheritance vs single inheritance, and there were product matrices for programming tools that had check boxes for crap like that. C++ and CLOS both supported multiple inheritance, so to the casual observer they were "better." Now I've never seen serious industrial software written in CLOS, or anyone even considering it, but it "had the features." It was just a different and crazy time. It's kind of amazing how open source/free/libre has altered things; the entire culture of building software is different and probably more healthy.

Anyone want to shit on design patterns next?


I'd offer another, possible reason:

- University professors who remain exclusively in academia missed the rising tide and teach what they know


I’m on the advisory board of my school’s comp sci department. Each of us advisors has our own experience and perspectives on what useful things the students should learn. Sometimes I’m arguing that no, they probably don’t need to learn RPG, just because that’s what one of my colleagues sees a lot in their branch of industry. In turn, they argue that some of my recommendations are more useful at SF tech startups than in long-term positions in the companies local to the school.

Without those various perspectives, you end up with students learning all kinds of goofy things just because no one said, nah, they’re probably not going to need that.


Isn't UML about conceptual thinking first, before you lose yourself in editing?


That was my take on it. How do you get students to practice the process of thinking conceptually first, then evaluate how thorough the planning is, without some tool like UML? It's one thing to lecture your students on the necessity of planning ahead. It's quite another to evaluate whether or not they know how to plan ahead.

Granted, UML is hardly used anywhere, but I must've learned 30 different specific software tools in college that I never used outside of college. However, I've used something like each one of them. I took a technical drawing class in high school and I still use some of the techniques I learned in it, even though absolutely no one uses t-squares, triangles, and actual paper in modern technical drawing today.

There's a fair argument re: how much planning and conceptualizing should be done ahead of starting the Agile process, and also a fair argument re: what that planning should look like (crude flowchart? UML-compliant class diagram?). But in rejecting the UML tool, are we also rejecting the idea of advance planning? Like, how completely do you need to reject advance planning that it takes the Agile loop to reveal that the customer actually needs software that uses an observer pattern?


It exists, you will see it, and when that happens you will be expected to understand what is there.

So it is well worth 1 hour or 2 to look at it.


Every software engineering team in my 20+ years career actually used 2-5 types of UML diagrams: classes, sequences, deployment, activity, state. I think it mostly depends on maturity of the team and engineering culture, whether UML is used or not. There’s certainly some value in it.


I don't think I have ever seen anybody actually getting value from a class diagram. I have seen many people creating them, but they are always useless.

I also don't think the UML variants of state machines and workflows are particularly popular. But well, there is probably somewhere where people use them. Also, those lists usually ignore entity-relationship diagrams, where UML just adopted the popular format (without even minor changes, unlike for workflows), and thus everybody uses the UML one.

But yes, people don't remember about deployment diagrams. Those are used a lot.


> I have seen many people creating them, but they are always useless.

Not every artifact is worth keeping, instead the value can come from the process of creation as a way to structure your thoughts or to explain or brainstorm possible solution. The value of class diagrams for those people is in the moment and it’s ok to throw them away later if they become useless.


Class diagrams are a good way to detect circular dependencies.


If your class diagram is comprehensive enough to show circular dependencies, then I'm completely sure¹ nobody will use it for anything.

Computers are much better at this kind of check anyway, and they don't need diagrams.

1 - Instead of "well, maybe it's possible, despite my never having seen it"
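The point about computers being better at this check is easy to demonstrate. Here is a minimal sketch (module names are hypothetical) using Python's standard-library graphlib, which raises an error naming the cycle instead of requiring anyone to spot it in a diagram:

```python
from graphlib import TopologicalSorter, CycleError

# Hypothetical module dependency map: module -> modules it depends on.
deps = {
    "billing": {"orders"},
    "orders": {"inventory"},
    "inventory": {"billing"},  # this edge closes the cycle
    "reports": {"orders"},
}

try:
    # static_order() raises CycleError if the graph is not a DAG.
    order = list(TopologicalSorter(deps).static_order())
    print("build order:", order)
except CycleError as exc:
    # The second element of args is the list of nodes forming the cycle.
    print("circular dependency:", " -> ".join(exc.args[1]))
```

In a real project the `deps` map could be generated from import statements, which also keeps the check in sync with the code automatically, unlike a diagram.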


> I think it mostly depends on maturity of the team and engineering culture, whether UML is used or not.

I suspect that's true: teams with a very immature engineering culture probably do use a lot more UML. It's a good way to feel like you're doing something useful instead of actually doing the hard part. Mature teams write code.


Part of engineering culture in general, not just software engineering, is ability to share knowledge though documentation and best practices. If you share the source code, you communicate your solution on a very low level. It is often necessary to zoom out to see the big picture and there visualization helps. You can create ad-hoc diagrams, but their expressive power is low: without a convention it’s basically just space, text, arrows and generic shapes. To increase the expressive power you need a visual language in which different shapes and lines have some semantics. And here comes UML and other diagram languages. If you are not using them, it is likely that you are not communicating efficiently. Can a mature team fail at communication? I leave the answer to you.


UML didn't help though because it is never kept up to date with what the code really is


How was it supposed to keep up? The most popular programming languages did not introduce concepts that are too hard to reflect with UML. Or do you mean the problem of maintaining documentation?


Maintaining documentation. UML cannot go even one developer-week without needing updates, but the developer will often not make them. By the time your UML is a month out of date it is useless at best, and misleading at worst.


This has nothing to do with UML. Ad-hoc diagrams or plain text descriptions become outdated at the same speed, which depends only on the level of detail that you put in documentation and not on the format of it. If you cannot keep up with the changes in the code, you are choosing the wrong level of detail, that's it.


Ad hoc diagrams are much more likely to be tossed in the trash can after a week. If you keep them longer, the same problems apply. If you only keep them for a week, then ad hoc is good enough: everyone still remembers what the symbols mean from when you created it, and the next time it won't matter that the symbols mean something different.


I agree with everything you said until you got to UML diagrams. The problem with UML diagrams is that they communicate exactly the same thing that the code should be communicating. Class diagrams are not higher-level than the code they describe. Instead, documentation should communicate high-level intent and interactions.


> Mature teams write code.

Sounds very Dunning-Krugerish “smart people like me do X and not smart people do Y”

Coding and documenting aren’t exclusive. Modeling is part of documenting. UML is a type of modeling.

I would think effective teams will want to build good software. And to build good software they’ll want to capture requirements, communicate with other teams, and test their software. Design helps with this and is part of coding.

At the end of a successful day, I should have some updated models, some code, and some running software. If someone wants to see what I did, they’d likely read through different parts.

Just like trying to figure out a system by just reading models sucks, so does trying to figure it out by just running the software.


UML is not documentation, it's code for people who can't read code. UML diagrams offer no additional information to someone who already can read code (and write readable code). They are training wheels. Mature bike riders don't use training wheels. Mature teams don't write UML -- they write code instead.

Other forms of documentation have their own tradeoffs, but I was not discussing those. You're the one who automatically equated UML with all possible documentation.

> to build good software they’ll want to capture requirements, communicate with other teams, and test their software

UML is awful at all of this.

> At the end of a successful day, I should have ...

> “smart people like me do X and not smart people do Y”

You don't see the irony here?


Mature teams document their code


With the odd caveat that you probably use sketches of these diagrams? As soon as you are trying to cram in all of the extra details stuff like class diagrams can do, you are probably wasting time.


I do not understand your question. If I do not deviate from convention but omit unimportant details, is it a sketch for you?

If you have a project document template from PMBoK, do you feel obliged to fill all the sections or you document only information relevant to your project? With UML it’s the same: it offers a lot, but you do not have to use everything to produce a conforming diagram.


Is it a sketch if you don't include every detail that UML was specifically designed to convey? Yeah? Noting that "sketch" is not a derogatory term here.

Note that many of these diagrams existed before UML; it was an attempt to formalize them. For classes, they are often used as code-generation tools in an attempt to keep a 1:1 correspondence between document and code. Laudable, but I have yet to see that work out well.

To your point, I think, the same ultimately goes for document templates. Way too many are used prescriptively as a way to make a successful project. Much to the chagrin of the document writer when the project fails. Often miserably. To the point that I'm convinced most accurate project documents are written after the fact. Certainly not in waterfall fashion ahead of the project they are describing.


>I think it mostly depends on maturity of the team

yep. kids out of college feel most inclined to use it.


>That time was mostly wasted as I've never used any of it and never met anyone who has used it.

So here's the conflict: if we agree that planning is good, communication is good, and that it is faster / cheaper to design before we build rather than rushing in to build something, then is it not a good idea to have something like UML in our tool kit? Perhaps UML was too complicated or overbearing, but I've always felt like a universal tool to describe aspects of the software we are building is fundamentally a good concept. Further, it feels like the right idea to document / designing up front in a way that increases buy-in and communicates what we are doing.

So if not UML, then what?


UML doesn't give me anything over random boxes on a whiteboard that are non standard, so long as the others understand them.


Activity diagrams do a decent job too. But yes, sequence and activity diagrams are all I have used.


It would be useful if everyone would at least be vaguely familiar with basic UML notation, i.e. action vs. object, multiplicity, connectors (association vs. composition vs. dependency vs. specialization, interface/implementation), class vs. instance, fork/join, swimlanes, etc. Otherwise you have to clarify over and over what means what in a diagram.

One powerful aspect of UML is that you can combine different diagram types. For example, you can use activity-diagram elements in parts of a sequence diagram.

For pragmatic use, “UML Distilled” [0] by Martin Fowler is a good introduction and reference.

[0] https://martinfowler.com/books/uml.html


It's certainly given too much importance in the curriculum, but the idea of having a common language (with standard semantics) for sharing design decisions is valuable.

I'd argue the issue is that because UML is not seen as valuable, everyone improvises their own dialect when drawing stuff, and so since nobody trusts people following correct diagram semantics (e.g., meaning of different arrow shapes), nobody trusts the resulting diagrams either, further reinforcing the notion that it's useless.


Sequence diagrams aside (they are useful!), I also felt far too much time was wasted on UML in my university course. The only small argument I can see in favor is perhaps to encourage some thinking and discussion around software design among students in an educational context, even if I've never once seen it used in industry. I can't say I'd miss it all that much if it were removed, though.


> I see them used all the time in FAANG.

Does that mean something? Besides you saying you worked in them? It's a question; I don't know.


It is shorthand for saying that it is used by engineers working on high-profile, high-impact, software that ships to hundreds of millions of people. Some people might argue that these teams work on some of the most important software in the world. So if the people who work on this important software do a thing, like use sequence diagrams, that is a decently good signal that the thing is useful.


Also used a lot in automotive software development processes for example. Basically, the more safety critical a system is the more formal diagramming / modelling there will be part of the design and development process. It’s important to be able to reason about complexities of such systems across a team / teams.


I have an undergrad friend who was learning this last semester. It's still taught, and it's still not used.


Pretty sad assessment if it truly was useless as you say. I've found structured approaches to solution planning and design (UML, and others) to be very useful in most projects.


Was it easier the second time around? :)


I discovered FMC some time ago, and it really feels like "UML, the good parts".

Fundamental Modeling Concepts http://fmc-modeling.org/

Used consistently, it really helps as a lingua franca across teams. Which was the UML aim all along, but it got caught up in "Enterprise Bloat" (like SOAP or XML)


In some ways, software development practices have degraded since that era. It largely has to do with the need for speed which comes at the expense of careful consideration, quality, integrity and the formal standards that support it.

In fact, I believe it pretty much killed the profession of software architect. Many teams had it as a dedicated role, and this indeed would be a person documenting/designing systems using UML or otherwise. And they'd know the classics, like memorizing all design patterns. Finally, they'd use formalized architectural decision making methodologies to justify tech choices.

Nobody seems to do anything like that anymore. Everybody is half-assing design or skipping it entirely. Solutions are reinvented and tech choices made on a whim by the loudest person, who won't see the consequences of it anyway. Because we've told ourselves that shipping garbage in short cycles is the one and only way to do things.


Yeah, hard disagree on all that. As someone who lived through that era professionally, and who has had an "architect" title in the past, I was actually resistant to ever stepping into that role, because I had so many bad experiences with the Formal Architecture Methodology crew, who would produce the most absurd and out-of-touch designs. There's a reason that "architecture astronaut" is a term from that era.

Lots of people make bad designs today, sure, but it's not for a lack of formal methodologies, because average design quality was way worse back in the design patterns 'n' UML days.


Seconded. I once went to an IBM seminar with Grady Booch and seldom have I heard such an inane string of platitudes. Architecture Astronauts are the worst, and if there is one thing Agile can legitimately claim, it's ridding us of that plague.


I agree with you both except that architects are alive and well. I worked with an ex IBM architect a couple of years ago. Lovely guy but for someone designing software systems, I found it remarkable that he didn’t know how to write code.


I'd say this is a company red flag if they are still having these people around. About four years ago I was at a company that still had someone like that. Useless and highfalutin, he obstructed many projects thanks to an archaic director who thought he was still necessary.

I got out after butting heads with him constantly, and I don't think said company ever shipped anything meaningful in his entire tenure there.


I think you get these people in body shops - companies that basically rent out their skilled staff to other companies. The Architect sounds super important and is the most expensive resource, so obviously they sell it super hard.

When I worked there I was a product guy, and it took me an embarrassingly long time to understand why I didn't fit in... but yeah, I didn't last long.


I worked for a company not long ago that had leadership which felt you can't be a real architect if you write code. See ya!


All of these experiences resonate with me. The pendulum has swung hard to the continuous deployment model and just like the other side of the long arc of the pendulum not everything about it is good.


I don't think it was software architecture, UML or design patterns that was bad. I think the 'Open-Closed Principle' is one of the worst ideas to ever gain popular acceptance.

For anyone who didn't live through that time, the Open-Closed Principle states that software should be open for extension, but closed for modification.

However, you could also rephrase that principle to be: 'you should always prematurely abstract your designs'.

I think if abstraction was viewed as a negative to be avoided unless necessary, software architecture would have been far better off.

To be fair, premature abstraction is a lot of fun for those that do it. It's just those that follow who aren't so keen.
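To illustrate the rephrased principle (the example and its names are mine, not anything from a specific codebase): the "open for extension, closed for modification" version ships an extension point before any second case exists, while the direct version is just modified when requirements actually change.

```python
from abc import ABC, abstractmethod

# "Open-closed" style: an abstraction shipped while there is still
# only one implementation -- the premature abstraction being criticized.
class DiscountPolicy(ABC):
    @abstractmethod
    def apply(self, price: float) -> float: ...

class NoDiscount(DiscountPolicy):
    def apply(self, price: float) -> float:
        return price

# The direct version: modify it when (and only when) a second case appears.
def price_after_discount(price: float) -> float:
    return price

# Both do the same thing today; one carries an interface to maintain.
assert NoDiscount().apply(9.99) == price_after_discount(9.99)
```

The hierarchy costs nothing to write and plenty to live with: every future discount now has to be "pressed into that shape", as a sibling comment puts it.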


That is an important perspective. I've always struggled to understand why (esp. in the Java enterprise world) things were so complex. It took me a while to see through it, and now I create abstractions when I need them, not just in case. That's why I don't like things like Clean Architecture.

E.g. I don't create an abstraction in case I someday need to switch the database from SQL to NoSQL, but when I need the abstraction right now for an alternative implementation (e.g. mocks for testing).
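As a sketch of that "abstraction only when a second implementation exists" rule (all names here are hypothetical), a thin protocol whose concrete justification is the test fake:

```python
from typing import Protocol

class UserStore(Protocol):
    """The abstraction exists because two implementations actually exist."""
    def get_email(self, user_id: int) -> str: ...

class SqlUserStore:
    """Production implementation; the real query is omitted in this sketch."""
    def get_email(self, user_id: int) -> str:
        raise NotImplementedError("would query the SQL database here")

class FakeUserStore:
    """In-memory fake for tests -- the concrete reason the protocol exists."""
    def __init__(self, emails: dict[int, str]) -> None:
        self.emails = emails

    def get_email(self, user_id: int) -> str:
        return self.emails[user_id]

def greeting(store: UserStore, user_id: int) -> str:
    # Depends on the protocol, so tests never touch a database.
    return f"Hello, {store.get_email(user_id)}"

# Usage in a test:
fake = FakeUserStore({1: "ada@example.com"})
assert greeting(fake, 1) == "Hello, ada@example.com"
```

No speculative SQL-to-NoSQL seam, just the one boundary that testing demands right now.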


This is very on point. Premature abstraction has always struck me as far more evil than anything premature optimization can create.

Every class has an interface abstraction, or worse, an inheritance hierarchy, that adds friction to change, even though it's the only implementation.

When something similar comes along it's pressed into that shape, because the abstraction is already there.

It's like that meme video where it's hard to watch all the round and triangular pieces being fit into the square hole.

And the problem is that while software is more flexible than traditional architecture, hence soft, the complexity limit is also soft or virtually non-existent. So in software it's worth more to be simple and flexible than to have a plan that's long and detailed.


I’d argue software architecture has never been more healthy.

DDD Europe by all accounts was a resounding success. We’ve replaced older architecture activities like data modeling first with event modeling and Domain-Driven Design.

We’ve learned to embrace monoliths when appropriate and reduce complexity with bounded contexts.

We have phenomenal testing capabilities that didn’t exist 20 years ago.

We have a myriad of data storage tools.

We understand front-end engineering from an information architecture perspective.

We can design detailed architectures in tools like LucidChart very quickly, complete with solution specifics.

I’ve been at this for 40 years and I’ve never felt better about being a software architect.


You may also just be more competent with 40 years experience?


Even now I’m still learning. That’s the most important lesson.


Could we bring back the UI people from that era though? Trying to standardize good software is insane but having a consistent UI with standard elements for all programs that's usable with both mouse and keyboard was kind of nice.


Very nice. There were decades of thoughtfulness and science that suddenly got replaced with "well, it looks cooler".


There was a lot of "looks cool" UI back then too, but mocking it was common, and none of it was successful because of it.


Struggling to think of any B2B apps whose design could be considered even remotely cool.


> And they'd know the classics, like memorizing all design patterns.

That sentence is bringing back ptsd-like flashbacks.

I remember distinctly how much everyone thought that approach you describe was broken when it was the trendy thing.

I'm not sure how universal it ever was, but it was at one point a trendy thing that everyone thought they should be doing... and that so many people subjected to it found disastrous. [The phrase "architecture astronaut" was a common epithet, and not a friendly one]

That above paragraph [without the brackets], ironically, could certainly be said of "agile" more recently too. I don't know how universal it ever was, but it was at one point a trendy thing that everyone thought they should be doing... and that so many people subjected to it found disastrous.

But yeah, I'm pretty convinced going back to appointing some almighty deity architecture astronaut who isn't responsible for or involved in any implementation (let alone operations!), who hands down plans from on high after "memorizing all design patterns" and drafting some diagrams, never sullied by "contact with the enemy"... no thank you, but thank you.

----

Instead of just complaining about that, though, what I'll say in addition is: I think the real problem is that engineers aren't given the time to carefully consider top-level designs. It's a basic business/resource issue -- until engineers have more breathing room to talk to each other and research and consider and come to decisions in an unhurried way, the top-level design stuff will remain chaotic. It's not an issue of appointing an ivory tower "architect", or something solved by it.

Although sure, there should be senior and even "staff" or "principal" people with more authority/responsibility for higher-level designs.

Everyone should be responsible for design at the level they are working. Everyone needs enough time to feel like they are doing it well, instead of running on a sweatshop code production treadmill.


To be fair to the design patterns people, a lot of them got baked into pieces of infrastructure like web servers and frameworks, while others are part of frequently included libraries.

If I went through and inspected any Node, Rails, Django, etc app I would find many Gang of Four design patterns, but very few of them would be in the project-specific code. They got implemented well, and now programmers can build new things that would have taken too long to do before.

And that was the intent of the Gang of Four book, not to teach you patterns you should copy mindlessly, but to give examples of how to identify and extract useful patterns from the software you've already written and describe them to others. Since that is a lot more difficult than memorization, very few took up that work.


This. As a hiring manager I noticed on interviews that many engineers when asked about patterns, do not even realize how much they rely on them in standard libraries and rarely understand that it’s not a fixed collection of templates, but rather a way of working with code. If you identify a common design and give it a name, you can save a lot of time explaining solutions based on it.


Design patterns are missing language features.

http://norvig.com/design-patterns/


People who say that don't understand design patterns and have never worked on a very large project with code dating back a few decades.

Design patterns are what let you deal with the mess that results from that.


Good _design_ improves long term maintainability. Design Patterns (in the GoF sense) are basically orthogonal to good design.

Besides, Peter Norvig, even as of 1996, is not the sort of neophyte you're referring to.


So, here's something I hate on this site. Arguments that are made in the fashion "People who disagree with me must not understand X". When in fact you're often talking to some of the smartest people in programming on here (though, maybe not in other things.)

It's possible to understand design patterns and work in a language that doesn't require them as much.

Here's a talk about it from 2009 with regards to python.

https://www.youtube.com/watch?v=kSQFZrTDaQ0
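To make that talk's point concrete, here is a small sketch (example names are mine) of the Strategy pattern collapsing into a first-class function in Python: no interface, no one-class-per-algorithm boilerplate.

```python
# Classic Strategy needs an abstract interface plus one class per algorithm.
# With first-class functions, a "strategy" is just a callable.

def flat_shipping(total: float) -> float:
    """Fixed shipping fee regardless of order size."""
    return 5.0

def free_over_50(total: float) -> float:
    """Waive shipping for large orders."""
    return 0.0 if total >= 50 else 5.0

def checkout(total: float, shipping_strategy) -> float:
    # The strategy is passed in like any other value.
    return total + shipping_strategy(total)

print(checkout(60.0, free_over_50))   # 60.0
print(checkout(40.0, flat_shipping))  # 45.0
```

The same absorption happens to Iterator (generators), Command (closures), and so on, which is Norvig's observation that many GoF patterns are language features in disguise.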


Some design patterns work around a bad language, but most are about complex problems. They are a way of describing abstraction, and that is what languages are about.


Even GoF notes that classes and interfaces, which are language features in C++ or Java, can be design patterns in a language like C.

I think in some ways AspectJ can make the observer design pattern obsolete in Java. I have arguments against doing it the aspect way, but it was a revelation nonetheless.


this.

Back in the day (before open source became so prevalent), lots of software was designed from scratch with minimal use of outside components and frameworks.

Today OSS frameworks (especially web frameworks) and libraries, PaaS, and the Cloud provide you with already baked-in design patterns, so there is less need for a SW Architect and proper design.

Also, most of the Gang of Four patterns just address the deficiencies of OOP and older PLs, so if you're using a modern PL with closures etc. there is no need for them.

EDIT:

Nowadays a vast majority of design patterns deal with the complexities of microservices architecture, so if your product is a monolith there is no need for them.

IMO the most useful design patterns are the ones dealing with error handling, reliability, and the essential complexity coming from the real world and human actors, not the self-inflicted accidental complexity coming from bad design decisions. A good SW Architect should be able to help in avoiding that.


I tend to agree about the time thing, namely that people aren't given enough of it to sit down and take their time with design decisions. If you squint you can kind of see that "software architects" are a business's way of creating a "solution" to this problem, but with lackluster results.

That said, I've been at places where management did a pretty good job of making sure that there was enough time to do this kind of work with middling results. People have to enjoy the work or feel invested in a way that makes them care about "3 years from now", and sadly in a lot of places there's a lot of "whatever, it'll be fine for as long as I work here."

It seemed to me that in let's say 50% of cases, having the extra time didn't really matter. Assumptions about what was being built were proven incorrect not long after a design was laid down and work had begun, or particular engineering types would create increasingly complicated designs with their time rather than doing the hard work of distilling the problem, sometimes including $PET_TECHNOLOGY as part of their solution.

The "let's design something truly great" really has to be in a team's DNA in my experience or you wind up with a good design that isn't followed, one person who does all the work, a gold plated design, etc.


That's fair, and a good observation.

I'm not sure what to make of it as far as general narrative.

If people are creating stuff they don't really care about, of course it won't be very good? And there's a lot of money spent paying people to build things that honestly nobody would reasonably care much about?

One way or another, the pining for a software engineering world where we all collaborate on creating things that are maintainable products with high-quality user-facing experiences, and have the time to do that... well, that's not the one we've got.


That’s because software architecture is something any senior employee should be able to do, and it’s not as important as people thought.

Like many abandoned corporate practices, I think it was abandoned for good reason. It may have made sense under different circumstances, like when you had a large army of cheap offshore devs who could not be trusted to architect a maintainable application.

If I had some ivory tower “architect” trying to interfere with my work I’d be so pissed. Anybody I’ve seen with that title, that wasn’t doing 1P cloud consulting where the title means something different, usually had no clue what they were doing and had been given the title as a soft retirement.


As much as I do think that software architecture does matter (though it should probably be somebody with a staff or principal hat, not a specific job title) I recently took a job at a place that really does like architect titles. Mine's even "lead architect".

It is pretty funny when somebody runs into me and realizes for the first time that I have the job because I build stuff and write code, not because I'm good at LucidChart. I'm planning things out beyond immediate needs, but not because I'm looking for job security--it's because I've built the thing we're doing before and would like to not make the same mistakes I've made in the past. I'm over here demanding adequate standards of code and low- to mid-level design, and the "wait he's serious?" of it is sometimes honestly pretty fun to run up against.

I am good at LucidChart though.


Yeah, I guess I would in retrospect refine my opinion to “software architecture is iterable and not completely separable from implementation” or that “architecture (as imagined by distinct architects who are shielded from implementation concerns) is not important”.

Principal engineers and such who are still involved in operations, implementation, and more tactical approaches are who I also think are “supposed” to be doing architecture, but even then, more as guides and first among equals than as people who hand down decisions from on-high.

The fundamental issue I have with a separate architect position is that it disempowers teams and makes them beholden to decisions that they may not agree with (and whose makers may well not understand the problem to the extent they do). It sounds like you're doing the better thing of running up and down the layers of abstraction so your contributions empower people rather than disempower them.


> Yeah, I guess I would in retrospect refine my opinion to “software architecture is iterable and not completely separable from implementation” or that “architecture (as imagined by distinct architects who are shielded from implementation concerns) is not important”.

This, I'd agree with. You have to be at the coalface to know what the hell is going on. At the same time, you have to be cognizant of business needs and why things are the way they are, which is to me a fair approximation of "the job of a principal engineer."

(My other hat here is "head of API governance" and that's largely a business-flavored analysis of APIs being brought onto our company-spanning platform. I couldn't escape having both in my head if I tried.)

> It sounds like you’re doing the better thing of running up and down the layers of abstraction so your contributions empower people rather than disempower them

Ideally, yes. In reality, I work for The Phone Company, and The Phone Company hires a lot, and I mean a lot, of vendor devs. I am doing their thinking for them a lot of the time; the swerve is that I can and do write code (have released moderately popular open-source libraries on their framework of choice, for example) and so the usual development practices of "sure let's make a dozen packages for marginal functionality" don't fly.

I am disempowering them, because ultimately, we will eventually be cycling out our vendors and I will be the one who has to own their output. So that output has to be something I can live with. But this place is Processes Georg and should absolutely not be counted.

(I like the job. I will enjoy when I eventually go back to a shop where the developers have a reason to feel ownership over the work.)


German speaker here. You can look at German IT to get an idea of what would have happened to SWE if you had kept those bureaucratic methods from the 2000s as the backbone of all SWE endeavors: a horrible, expensive, non-working mess with barely any progress.

I think this is what many people, esp. from outside the SWE world, don't get: software engineering is a deeply social kind of work. There are dozens of solutions for any given problem, and you have to agree on one that works for all peers. That's the job. Drawing fancy diagrams is beside the point unless they serve that communication.


I'm also interested in German software engineering culture. And also, a heavy plus to this being a social problem more than an "engineers just need more time" problem. I tried to articulate this in another comment of mine but mostly beat around the bush. This is more directly what I was attempting to say.


Well, my theory is that the core reason is that Germany never developed a thriving startup IT economy[0] that was ever relevant to the GDP, especially not in comparison with the industrial sectors (cars, steel, chemistry), so IT got completely ignored by politicians. That resulted in no one who'd challenge the biggest gatekeepers, Telekom and SAP, so they lobbied for and enforced whatever they wanted[1].

If you study CS at a German university you can easily get an MA without being able to write software at all (I personally know several such people). German universities teach what is easy to teach top-down and test for: the textbook stuff that came out of the whole Java EE/OOP/SOAP/UML sector. You barely get practical coding lessons and can avoid them completely if you want. The academic sector never realized how crappy German software products are and never bothered looking at what Big Tech is doing. With the current data protection and upcoming AI regulations, as a university you'd have a hard time collecting enough training data because your law department would step in, citing the legal uncertainty (I've heard stories from friends).

Then we have this little crazy island Berlin, which up until maybe 10ys ago was mainly driven by the infamous Rocket Internet "startup incubator", led by a couple of MBA sociopath billionaires trying to copycat everything from SV and then sell it back to the SV company whenever it wanted to start conquering Europe. The thing is, they never really developed enough SWE excellence to make the copycats successful in Germany or anywhere else in Europe (with some exceptions).

Third example? Here you are: Today I learned that the gov't already decided 20ys ago that they want to provide all usual governmental services online. 20ys later they (allegedly) poured 3.5Bn EUR into an unholy setup of consulting businesses, incompetent civil servants and a panel of software architect astronauts who could never really agree on things. All their deliverables are click-dummies, gazillions of PDFs with SOAP/WSDL/OMG/UML thingies and prototype projects rolled out in "experimental" cities. So if you happen to live in Bremen you might be able to register your dog online but not in Berlin. Meanwhile, in Berlin you might be able to get a license plate for your car online. Pretty much all governmental projects (Covid vaccination registration, special governmental aid for students because of high inflation, etc.) broke down because all their systems are incapable of handling more than maybe 10k visitors (my theory is it always breaks down whenever the biggest single Oracle DB host they could buy goes down).

Germany has some decent software engineers, especially if they're self-trained and not brainwashed by one of the universities or big corps. But the environment manages to regularly piss them off and make em emigrate to somewhere else.

Ouff, much text. Hope at least someone enjoys reading it.

---

[0] This is because if you start a company in Germany you're faced with horrible bureaucracy wrt taxes, laws, politics, governmental authorities, etc. For example, you're forced to pay for a membership in a funny non-IT institution called "Industrie- und Handelskammer (IHK)" which essentially consists of a crowd of old men who are officially supposed to lobby for you and create a networking environment, but if you ask them something like "hey, can you tell me how many companies are having problem XYZ right now?" they will tell you that they don't have any numbers and have no means to collect them. In 2023 they still send out a meaningless paper printed magazine. So not helpful at all, but they take a significant share of your gross turnover, mainly to pay for their pensions. Additionally, with all the regulations the governments set up over the years, they're now facing a significant shortfall in civil servants because Germany is getting older and older. As a result they don't have enough people to enforce or check regulations in time, and they never managed to develop any IT-based systems. This became a big problem with the influx of refugees in recent years and affects many other concerns as well. Finally, there's this cultural difference to, e.g., the US that average Germans are not business-savvy at all. If you tell the average German mom that you want to start a business, she will tell you that you're a dreamer and should get a proper job. Germans generally tend to think that companies are something god-given.

[1] They are still the go-to businesses if the gov't quickly needs something, like the Covid tracing app which German tax payers AFAIR ended up paying 120M EUR for (lol).


I just read up on the IHK and it sounds like the business equivalent of the dues an employee pays to be a member of a union. The fee is 47 EUR + 0.14% of gross income, which is not exactly "significant" compared to the other fees, taxes etc.

As for the old people running the IHK, from what I read, membership is one-company-one-vote, so what stops people from running for election?

As for the influx of refugees, that is a distinct advantage to Germany of an increased availability of workers, including many educated Syrians and others.

So it sounds like there's a problem with entrepreneurship in Germany outside the engineering / petro-chemical businesses. There's also a problem with your political choices due to the usual issues prevalent in every Western country. An aging population, the effects of the "financial industry" (a misnomer if ever there was one), the effects of climate change, etc.


I'm not familiar with how IT works in Germany. What's different about it? Got any stories?


One word: SAP.


Oh no. I'm so sorry.


We stopped doing that stuff because it was useless. We eliminated those practices and that profession because it was actively harmful to making good software. All those decision making methodologies consistently lead to worse technology choices than one dude actually trying to write some code with the thing for half an hour.


UML is mostly useless, but thinking ahead, even a bit, has value. I've seen stuff shipped in a sprint that was not used by any actual end users, only to be "redone" in the next sprint.


My experience is that shipping stuff that doesn't get used by any actual end users is more often caused by thinking ahead too much than by thinking ahead too little.


I've actually seen it both ways: whole features (or even products) that were too early. But on the implementation level, I've seen someone pick the wrong thing (library, database, whatever) "because it was simple", only to have it be thrown out before getting any real usage.


"Pick the wrong thing and then retract, losing some work" is as close to optimal as you can get. Iterate.

The much worse outcome is "pick a thing, fiercely defend it whether it helps the ship to float or to sink".


I agree that outcome is worse. However, both extremes are bad. The optimal approach would be to do a little bit more work upfront, pick a "better" thing (where "better" is at least more than the next sprint or 3!), and use that.


Even then I'm not sure. Sometimes the quickest way to discover the problems with something is to try it on the real product.


The problem is that the expectation was to document the software design in excruciating detail (e.g. class diagrams with all fields and methods, etc.) before any coding. Taken to the extreme, you were meant to do all the coding in your head and in a Word document before doing it again in your IDE.

This wastes a huge amount of time and usually the documents become obsolete as soon as you try to actually run some code.

This is the very issue the Agile manifesto identifies and proposes a solution to when it says "working software over comprehensive documentation". 'Comprehensive' is a key word as they don't mean NO documentation but effectively just what's needed to plan and help people understand the code.


I don’t think this is any more true than it ever was. If anything, the rise of open source libraries, GitHub and friends has made it easier than ever to reuse software and avoid NIH. This has led to a different set of problems, but relative to when I was writing C in the 90s I’d say software reuse and design is far advanced from those days.

I do however agree that design is sorely missing from the current software project management zeitgeist, which means it’s done in a more ad hoc way. People are taking frameworks like Scrum far too literally, and I agree that in some cases there is little vision or overall architecture because the framework doesn’t include these things. There should be design and review activities both before and after coding, but these are largely neglected from most project management frameworks. Scrum for example doesn’t even include backlog management, which in my work is a critical activity.

While that doesn’t mean design, architecture and scope management is not done, it’s certainly less visible than it might be.

All of that said, I’m a big believer that the ultimate expression of any design is code. The design artefacts that come before code is written are scaffolding, age quickly, and are soon useless. So there’s not much need to keep them around, and they are mostly of interest as preliminary directions.

In this regard I’m a big fan of Jack Reeves’ essays from the same period at https://www.developerdotstar.com/mag/articles/reeves_design_...


> In fact, I believe it pretty much killed the profession of software architect.

And thank goodness. I am all in favor of more consideration, more quality, more integrity. I like good architecture. But I think it's a giant mistake to create a software architect role. Every company where I saw that in practice, the average architect was a bloviator who did no actual work but was very excited to tell everybody else how to work.

At the code level, this resulted in a lot of messes. The edicts and white papers sounded good in theory to the kind of people who decided the bonuses of the architects. But frequently they were unworkable in practice, causing a lot of code that conformed to the theory but was a pain to actually deal with.

I agree with you that a lot of things are half-assed and rushed, of course. But we are never going back to a world of 18-36 month release cycles where people could stroke their chins for a quarter or two before building anything. Instead, we need to move in the direction of continually investing in quality, so that the design of systems improves over time. Something that I think is actually easier, in that waterfall design practices locked in important decisions early on, when people knew the least.


We haven't told ourselves that shipping garbage in short cycles is the only way to do things. The market has pretty much determined that companies that over invest in formal software design activities lose out competitively in the long run, and the survivors are the ones with the right balance. You're welcome to prove the market wrong.


> The market has pretty much determined that companies that over invest in formal software design activities lose out competitively in the long run

The same market that gave us AT&T, Comcast, DRMed IoT Juice presses, FTX, Enron, and so on? Not sure it’s wise to conclude that the market produces optimal solutions, or even incrementally better solutions than yesterday. Especially in domains with extremely long feedback cycles, like organizational trends.

I DO think there is some hocus pocus market zeitgeist that does something resembling gradient descent, but it’s acting over very long cycles and there’s a ton of room for cargo culting, grifting, and opportunistic grabs along the way. Heck, marketing is an entire field dedicated to affecting “rational actors” in the marketplace.


AT&T and Comcast are great examples of companies that tried to expand into online video distribution and completely lost out to companies with better software management practices. Both of these companies had a huge leg up 10 years ago, in terms of customer base and equipment (set top boxes already present in many US households). Neither of them is more than a small blip on Netflix's radar now, who have been able to leapfrog into installing their software on smart TVs. The margins they have and their growth rates are much, much smaller than software-centric competitors.


In truth it depends on the software. For web or phone or desktop apps which can be upgraded on the fly, short cycles are ok.

On the other hand, Boeing's crappy software practices resulting in the 737-MAX crashes are an area where short-cuts killed people.


So much this. Coding to marketing dictums on the cycle of days/weeks is completely NOTHING like engineering solutions where subtle design issues can make the difference between life and death (or IRL super good thing A and IRL super bad thing B).


And they just shipped garbage in longer cycles... as anyone that had to deal with such designed legacy apps can confirm.


The pendulum swung, and now we bias towards action rather than design. But to be honest, the "software architect" era wasn't great either, with people in the role sometimes not qualified, spending months on an architecture behind the curtain, releasing some over-verbose spec document that doesn't match the product needs, cargo-culting the latest fad from Martin Fowler.


Just as well we no longer cargo-cult any practices these days.


The problem you're describing exists in all engineering disciplines. You don't hire a firm to do feasibility studies, develop a design plan, do a materials bid, and subcontract clearing and construction for your mailbox installation, do you? Most people hire a dude hanging out at Home Depot and pay him a flat fee.

The risk and complexity dictate how much architecture and engineering are involved, and as with all things, businesses will try to get by with the bare minimum. The automotive, chemical, and other industrial sectors invest a lot in software architecture and enterprise architecture because the risk is so high. But even a medium-size e-commerce platform has very low risk and complexity.


while i feel as if i largely understand what you're pointing out, i kinda want to offer my own speculation as a SWE who is very guilty in thinking in terms of sequence diagrams (and also more formal UML) -- sometimes UML is so fucking bogged down with (impl) details, i am just over here going "i don't give a shit about the (impl) details, i just want to know the abstract concept(s), and logical flows, and focus on necessary system interactions" -- bc i can (and should at this stage) worry about (impl) details later.

i prefer to reason about a system and component relationships using, say, a single word as representation, instead of glaring at one or many inheritance directionals, interface details, and other "field" information, which is usually conveyed in a UML node.

i do not think the lack of formalism is a degradation -- we work with abstractions after-all, so it makes sense to further leverage that fact and express things simply, at a high-level, and straight-forwardly -- you can pack a lot into a single word.

of course as the nature of any tool, there is a time and a place for its application. but i don't think it is fair to call it a degradation.


I've noticed the "over-detailed" problem a lot at my company that has dedicated "software architect" roles. IMO part of the issue is that most of them do not write any code anymore, but 'architecture' alone just doesn't have a full-time job's worth of stuff to do (or they're bad about finding it). The 'solution' they end up on is just treating UML like code and specifying exactly how they would have written the whole thing - which is a huge waste of time when they could have just been writing real code.


Software architects, much like real architects, often end up with absolutely beautiful elegant designs which are unfit for purpose and have leaky roofs.


And other architects make buildings/software that get done on time, because unlike with babies, adding people can make things go faster (it is always sublinear, but good architecture can get close to linear, while bad architecture can go negative).


> Everybody is half-assing design or skipping it entirely. Solutions are reinvented and tech choices made on a whim by the loudest person who won't see the consequences of it anyway. Because we've told ourselves that shipping garbage in short cycles is the one and only way to do things.

I agree. I don't personally fetishize the old slow processes nor ivory tower architectures, but I think there's a healthy spectrum between that and the complete 'software fuckery' (https://web.archive.org/web/20160308032127/https://medium.co...)


Hardware and bandwidth are so cheap now that you don't have to care about design so much. Make a service, JSON in, JSON out. Some kind of loose versioning but no formal schema, and fix it up as you go.


Sequence diagrams and state machines should probably be taught much better in CS courses. You generally see state machines in a theory course and things like sequence diagrams in an object-oriented course.

What we should probably do, in the advanced data structures programming course that everyone has to take, is have students create a model of an elevator and diagram the behavior using a sequence diagram. This would be achieved using an associative array, AKA a map [1], that represents where the elevator is and what it has to do next (current state and next state based on input).

If you program this correctly you can even get around unit testing, because you have diagrammed the behavior and all parts of the system are known: you can simply walk through them all, and that would be a sufficient test. An example of a library that implements this is at https://pypi.org/project/python-statemachine/

[1] https://en.wikipedia.org/wiki/Associative_array
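
For what it's worth, a minimal sketch of the elevator idea described above: a map keyed on (current state, input) giving the next state. The state and event names here are invented for illustration, not taken from any particular library:

```python
# Hypothetical elevator model: the associative array IS the state machine.
# Every allowed transition is one entry; everything else is a no-op.
TRANSITIONS = {
    ("idle", "call_up"): "moving_up",
    ("idle", "call_down"): "moving_down",
    ("moving_up", "arrive"): "doors_open",
    ("moving_down", "arrive"): "doors_open",
    ("doors_open", "timeout"): "idle",
}

def step(state, event):
    # Unknown (state, event) pairs leave the state unchanged;
    # the table itself is the complete specification.
    return TRANSITIONS.get((state, event), state)

# Because every transition is enumerated in one table, "testing" can be
# as simple as walking the table, as the comment suggests:
state = "idle"
for event in ["call_up", "arrive", "timeout"]:
    state = step(state, event)
# state is back to "idle"
```

The appeal is that the diagram and the implementation are the same artifact, so they can't drift apart.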


Overdesign is one of the worst things to do. Trying to make something fit perfectly into some obscure pattern with horrible class hierarchies. Look at the linux kernel sources, it does not look "beautiful" to some architects but the actual ideas and patterns are simple, anyone can jump into it if they know C.


No, that's the era of the worst, shittiest legacy software to date. The computers were just fast enough to handle layers and layers of abstraction, and oh boy, the industry loved to pile them on, and UML was just another way to pile up the rot.


I think the industry is diverging.

- There's very high-level engineering occurring ('computer science'). I always assumed this would be at places with 'web-scale' problems, but I've been seeing amazing work in local (not-web-scale) product companies here in Australia.

- On the other hand, I think a lot of other development work is not much above building flat-pack furniture. I guess this is where lo/no-code solutions will thrive.

The important thing is to recognise which is being done, and which practices apply in each case.


In my short experience this is mostly due to OOP falling out of style, being replaced by a hybrid approach that combines functional, procedural, and OO programming. That, together with newer languages providing alternatives to inheritance (traits/concepts) and such, means you rarely find yourself needing much more than the visitor and factory patterns.


I am curious to know what these “formalized architectural decision making methodologies” are. Any books that you know of that cover these topics?



Oh this! I suffered learning this during college ;)


I disagree. Software back then used to just fail as often, there was just less of it. Now there's lots of working software, most software sucks and lots of it fails. But the people paying well, and enabling their teams usually get good software at a reasonable price reflecting the complexity.

The project management triangle never stopped existing.


Apparently, you probably haven't started a real business or created a real successful product. This is what people believe when they are just a cog in the machine: that they can spend endless time designing or planning for things that are out of touch with reality.


Has anything replaced UML? I mean, if there are teams out there that aren't shunning design, what do they use to create architecture diagrams?


I can’t remember where I read this (Martin Fowler maybe?), but I agree that “box and line” diagrams should be enough for most design cases.

I was eager to use UML when it came out but I agree with the article, the only thing I kept was sequence diagrams. Most of the rest of it was just a complicated way to represent stuff that is best represented in code or plain documentation.

I’m also not afraid to use stuff that’s out of fashion. I use simplified ER diagrams, and even flow charts on occasion.

But the tool I use most often in design is SQL DDL…

I have also used Mermaid (using a modified Docsy theme in Hugo) and it is also great. Using Typora to edit Mermaid is also excellent.


I like use case diagrams as well. The thing is, these should be short-lived, used to make a specific illustration at a specific time. Not some holy grail of documentation that people should take as god-given truth for the whole life of the system.


Yeah, I posted elsewhere in this thread that I'm a big fan of Jack Reeves' essays on "code as design"; namely, that the code is what's important, and the design is just scaffolding.

"The following essays by Jack W. Reeves offer three perspectives on a single theme, namely that programming is fundamentally a design activity and that the only final and true representation of "the design" is the source code itself. This simple assertion gives rise to a rich discussion—one which Reeves explores fully in these three essays."

https://www.developerdotstar.com/mag/articles/reeves_design_...


Yes, actors and actions. It's the crux of the use case. I'm still operating under the assumption that UML invented the modern use case. I will not be googling this next.


Aggressive IDE-supported semi-automated refactoring has replaced UML. Modern languages have been designed with toolability in mind. Modern IDEs allow ruthless redesign of the architecture of code on the fly, allowing architecture to be modified to reflect actual need rather than being a fairytale written before the first line of code gets written.

Teams aren't shunning design. They're just doing it incrementally, because as it turns out, trying to design a piece of software before the first line of code is written doesn't actually work.

Even in peak UML (2000 or so) there were no really practical UML diagramming tools. (Rational Rose was impractically expensive, and buggy enough that it was a race to see if it wouldn't crash before you actually finished your diagram).

Corel Draw had passable UML diagramming support. But I don't imagine it still does.


C4 is more than adequate.

Draw the boxes and lines and make sure that people understand what they mean. Describe the system from the perspective of the reader. Just like a real architect will have different drawings/plans/elevations for their customer, the planning authorities, and the builder.

The architecture diagrams are meant to communicate the design, not comply with some over-worked standard.

UML was a clusterfuck that evolved from the trifecta of late-90s OOP (inheritance, not composition), design patterns that mostly provided fill-ins for what was missing from Java as a language, and the ridiculous concept of generating code from diagrams that could be regenerated from code, which never, ever worked, in the same way that ORMs have an impedance mismatch between OO and relational logic.

It was yet another silver bullet, the late 70s/early 80s had "structured programming", the late 80s/early 90s had CASE, the 90s had all the stupid diagramming tools where people argued about the shape of the bubbles and what arrows to put on the lines.

There was also the Capability Maturity Model, where everyone was trying to get to "Level 5" which was only useful if you were doing exactly the same software over and over again, along with the "6 Sigma" and "Black Belt" nonsense.

The 2000s had the "iterative Rational Unified Process" (an excuse to sell expensive tools from IBM), along with CORBA et al.

The last decade has suffered from the Agile-with-a-capital-A and especially "Scaled Agile" which is just an excuse for project managers to again treat programmers as fungible, while losing all of the affordances of actual project management, like GANTT, critical path, S curves, earned value etc.

Sprints are nonsense, so is "t shirt" sizing and velocity and burn down charts, and retrospectives and scrum "masters".

There are a couple of useful things:

* Domain Driven Design, using Names That Make Sense To The Customer

* Entity Relationship Diagrams, where the Entities are the same Names as the DDD

* State Diagrams for those Entities, describing the events that will cause them to change state

* Event definitions that match the State Diagram transitions, where externally generated events end up being your external API.
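
A hedged sketch of how the last two bullets fit together, with a hypothetical Order entity whose state diagram is encoded as a lookup table and whose externally generated events are its only public API (the entity, state, and event names here are all invented for illustration):

```python
# State diagram of a hypothetical Order entity, encoded as data:
# (current state, event) -> next state. The events that appear here
# are exactly the operations the outside world may invoke.
ORDER_TRANSITIONS = {
    ("draft", "place"): "placed",
    ("placed", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("draft", "cancel"): "cancelled",
    ("placed", "cancel"): "cancelled",
}

class Order:
    def __init__(self):
        self.state = "draft"

    def handle(self, event):
        # Reject any event the state diagram doesn't allow from this state;
        # the table, not scattered if-statements, defines the behavior.
        key = (self.state, event)
        if key not in ORDER_TRANSITIONS:
            raise ValueError(f"event {event!r} not allowed in state {self.state!r}")
        self.state = ORDER_TRANSITIONS[key]

order = Order()
order.handle("place")
order.handle("pay")
# order.state is now "paid"; order.handle("cancel") would raise ValueError
```

The point of the bullet list is visible here: the entity name, its states, and its events all come straight from the diagram, so the diagram and the external API can't disagree.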

Bah, I've been in this industry for almost 40 years and it's the same shit over and over, just with different names, and different consultants selling expensive courses.


Ok boomer. :-P

Having lived through both, modern development practice is superior in practically every way, most especially with respect to quality.

The problem with Heavy Development Methodologies is that they didn't actually work.


Don't take this as an endorsement of BDUF practices and the like, they're as fraught as anything, but it's important to also recognize what modern development practices trend hard towards themselves: the trapping of oneself in a local maximum.

I have a blog post in the works about this but it's not ready to share; in short, I can't help but notice that hyper-focused, optimize-for-time-to-market, minimum-viable-product projects have a real, and frequently killer, problem once you have built your initial, hopefully-better mousetrap. Almost every company I've ever worked for has gotten that initial mousetrap done, tried to expand horizontally to actually have enough stuff-that-works-together to actually sell, and fallen on their faces because that initial super-specific development effort created not just code, but product assumptions that are prohibitive to unwind.

Most of them hit the hillside because reality is no longer playing ball with their (frequently VC-driven) needs to cut scope and ship.


So sad and so true


Yeah, Mermaid gets it. Mermaid . . wait a second . . oh, ok, these are the same people that do Mermaid.js, they're just trying to make a living doing it.

Another useful chart type provided by Mermaid.js is the git diagram, which I use all the time when brainstorming change processes, especially for other folks who might not be git-conversant.

https://mermaid.js.org/syntax/gitgraph.html


I find Mermaid's syntax difficult to use and understand past the simplest examples. I've also run into some strange edge cases with it, but can't remember the details right now.

Out of these text-to-diagram tools, D2's syntax seems the friendliest to me. See https://text-to-diagram.com/.


Is there a way to get a git diagram out of D2? It does have nicely streamlined syntax.

One thing that worries me is that the profusion of text-based graph description languages will result in a family of software that's unparseable due to its success. We have Graphviz, GNUplot, PlantUML, BlockDiag, Mermaid, Kroki, Vega, and too many others to count - but we don't have a Pandoc.


pandia has a nice ring to it, wikipedia:

In Greek mythology, the goddess Pandia or Pandeia was a daughter of Zeus and the goddess Selene, the Greek personification of the moon.


Thinking "the only good thing" is understandable, since there's a lot of noise, but that talk discourages people from learning a lot of good stuff that's buried.

Before UML, there were many methodologies. UML was started by a unified team of some of the most noted OO methodologists.

There was a lot of good thinking, but one of the adoption problems we had before UML (as tools developers and methodologists) was that many people got a poor introduction to it, and missed the whole point.

For example, on a diagram metamodel that emphasized formalizing and visualizing the relationships among objects... some people would think it was a TPS Cover Sheet task, to laboriously draw their class inheritance hierarchy and summarize their attributes and method signatures, for purely corporate bureaucratic reasons.

You could also contrast people who needed to build complex embedded system behavior, for whom Statecharts were a very valuable tool for keeping the model manageable -- distinguished from people who were told to do it when it wasn't appropriate, and they just kinda tried to document their code control flow by misusing that same diagram notation.

Today, I can't blame people for not understanding, since most examples they'll now see are incorrect noise, and the current UML spec itself doesn't do this reputation any favors.

Half-seriously, maybe we could use a reset, in which people who neither know or care what they're doing stop being required to go through the motions of pretending to do it. Also stop trying to have people who don't actually use it teach it to students who just want a good grade in the class (for their FAANG job application), and who have little-to-no experience necessary to appreciate the real thing. Also don't make it a marketable resume keyword. Then the only people who will be doing or talking about it are people who care enough to figure out the genuine merits, with no incentive to pretend.

Then, ideally, the only time others will see it is when someone whips out a formalized diagram for something complicated, and proceeds to start explaining it by walking the audience through the diagram... Some people in the audience will get the tingles, as they're not only understanding, but they're also sensing a very powerful modeling tool that would address some problems that they and their team have in their work. (Realistically, this would start to happen, but as soon as it becomes a resume keyword or interview ritual, history would repeat, but maybe that time with a larger mitigating mass of people who understand and appreciate.)


>There was a lot of good thinking, but one of the adoption problems we had before UML (as tools developers and methodologists) was that many people got a poor introduction to it, and missed the whole point.

Not really. UML was doomed from the start by the increased focus on time to market. And frankly, you can read the code if you are working on an already-built system. Remember that software usually does not need to be correct, or even well designed, to support a business case. It is one of software's greatest strengths and weaknesses compared to other disciplines.

It does help in some integration tasks, and works decently as a format-agnostic schema definition language.

I'll also concede that state diagrams are good, but well.. they are also quite simple.

Then there's the readability issue - unless you've worked a ton with UML you WILL make mistakes when it comes to object relationship counts, and there are a lot of other non-obvious gotchas for newcomers (I used to teach reading entity relationship diagrams to non-technical people so they could understand the model they were working with, as it was part of a legal bill).

Not to mention that the only proponents of UML I've personally met were people who were stuck in academia and never worked on any real software product.

And even for R&D work on harder problems it's usually easier to write a toy side project and test solutions.

The code you write in such a toy project IS an entity relationship model. Tests and example usage are the happy path of a sequence diagram. Etc.


You might enjoy looking a little deeper. For one example:

> I'll also concede that state diagrams are good, but well.. they are also quite simple.

One of the goals of state modeling is to simplify. A sufficient metamodel will include (in addition to states and transitions on events that people might see in automata theory and misc. CS education) less-simple features like concurrency, superstates, transition guards, maybe real-time constraints.

When you need a nontrivial system to actually work, in all cases, (not just toss your sprint task over the wall, and then do bug tasks later) good use of the abstractions of this "simple" diagram, and inspecting it, will tend to avert problems that even an unusually conscientious programmer might miss in just code. And it also helps communicate less-ambiguously to others who come later and need to evolve the code. Unlike formal proofs, it's pretty accessible to read.

(It can be too accessible, in that people who are accustomed to seeing hand-wavy business diagrams with ad hoc notation used inconsistently think they also get the gist of this diagram, while not realizing how much more rigorous it is. But that's not a big problem -- you just have to walk them through it, ask questions to confirm facts expressed, emphasize when something has to be precise and exhaustive, etc. The problems are more when the diagram is poor, yet the technical appearance confers an undeserved aura of respectability, and people who can't tell the difference, act on that, but that's not unique to these diagrams.)
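To make those abstractions concrete, here is a minimal sketch in Mermaid's statechart notation (state names and events are illustrative, not from any particular system): the superstate lets a single "timeout" transition cover every substate, and a UML-style bracketed guard rides along in the transition label.

    stateDiagram-v2
        state Connected {
            Idle --> Sending: send [queue not empty]
            Sending --> Idle: done
        }
        Connected --> Disconnected: timeout
        Disconnected --> Connected: reconnect

Note that Mermaid treats the guard as plain label text; a full UML tool would actually enforce it.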


> good use of the abstractions of this "simple" diagram, and inspecting it, will tend to avert problems that even an unusually conscientious programmer might miss in just code

Very good point. If you have expertly written UML and an expert viewer, they will get quite a lot out of it. One of the biggest issues I have seen with UML documentation is that it is extremely difficult to keep in sync with what the actual implementation does. In one of my past jobs we actually started an effort to address this by generating PlantUML from code, but it never really came to fruition.


I used those features, I used to work with UML, and as I said - I even taught non-technical folks how to read it.

It helps in very niche cases - as I said, mostly state diagrams, and non-trivial data models when you have to interop with different systems or define a common data model. I haven't worked on designing protocols, but that's basically committee work.

That does not change the fact that it is useless for 99.9% (or more) of typical dev work on typical dev systems - which are basically CRUD systems with some fluff. That is the biggest barrier to UML gaining traction: it is completely incompatible with that kind of work.

And that's disregarding the parity of diagrams with the codebase - there's a reason a lot of codebases reduce comments: they go out of date pretty fast.


> increased time to market

I've seen people spend days and days trying to save a few hours of paper design time.

We often do quick paper or marker board designs, which often have some UML. You don't want to spend more than 10% of your time on this, but it can save quite a bit of time to market. If someone really wants the design saved somewhere, we just take a pic and upload it.


I think you are right and it depends so much on what you are building and in what context.

If shipping is expensive, you build something complex and correctness matters, you want to go all in on design and verification. You might even want to use TLA+ to verify your algorithms beforehand.

If you build a SaaS app that people use for leisure and is mostly CRUD, you probably don't need UML diagrams for the vast majority of your work. Exceptions will be the larger systems design where you still need to plan things like DB sharding approach, services etc.


Is ERD part of UML? I find ERDs useful representations of databases. They can be generated automatically and are very good for quickly locating the right tables for a query. So I wanted to add a comment stating this to the parent - but then I googled to check if they really are part of UML - and it seems that they predate UML and are not really related.


Correct, UML didn't have anything to say about relational databases.

I use two types of diagrams:

ERDs

and

Random blocks and arrows and icons that mean whatever I want them to mean for that particular diagram, based on whatever concept I want to communicate at that point. Maybe it's network architecture, maybe it's dataflows, maybe it's software components. Each diagram comes with some text explaining what it represents.


A subset of UML's static object modeling diagram notation (which they adapted from OMT) can be used like ERDs, for a relational database without an ORM.
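For comparison, the crow's-foot ER style discussed upthread looks like this in Mermaid's ER notation (the entities are illustrative):

    erDiagram
        CUSTOMER ||--o{ ORDER : places
        ORDER ||--|{ LINE_ITEM : contains

The `||--o{` reads as "exactly one customer places zero or more orders", which is the same cardinality information a UML static object model would carry as multiplicities.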


I wonder if there's a parallel here with the journey that some people go on with respect to design documentation in general. It can be easy to have a mandate from on high that your process should be research -> planning -> design -> review -> implementation -> review -> ship, and that the design doc is the key deliverable between the design and implementation stages. You can even argue that this is an "agile" process, if there's a feedback loop that permits the design to be revised as further discovery emerges during implementation.

But the reality is that an awful lot of the software we build just isn't complex enough to warrant all this ceremony. It's not branchy, there's no significant error handling to talk about, no big new dependencies being added, no security implications, no long-term maintenance of an external interface— it's just grabbing widgets from pile A, frobnicating them, and dropping them on pile B. There's barely enough there to even unit test much less write a "design doc" about and agonize over in a meeting with bigwigs.

And unfortunately a lot of the projects that junior developers start out with look exactly like this. Boring, obvious code that does one thing in the exact boring, obvious way that you'd expect. So it's easy to fall out of the habit of thinking intentionally about design, until you're suddenly faced with a big thorny problem that legitimately has multiple paths forward, and what starts as a rubber-duck conversation with the person beside you eventually becomes "hmm... I think I need to write down a page or two that describes these options, gives background on why the choice matters, and describes why that selection was made; then maybe afterward I'll circulate that document to my colleagues for their review and thumbs-up, because maybe there's something I'm overlooking here that they could contribute to?"

Oh wait.


"real UML has never been tried"

significantly more interesting technological failures deserve dibs on this excuse


Some of these methodologies and their tools have been used very successfully. By people who built complex systems that really needed to come together and work correctly.

I'm addressing the masses of us Webrogrammers, who classify all that space as "UML", and dismiss it. Not everyone is just going to be tossing (ChatGPT-assisted) sprint tasks over the wall, while calculating when to make their next job hop. Some teams will have to build complex things that work, and they can benefit from informed and judicious use of abstractions and views on the system.


> I'm addressing the masses of us Webrogrammers, who classify all that space as "UML", and dismiss it. Not everyone is just going to be tossing (ChatGPT-assisted) sprint tasks over the wall, while calculating when to make their next job hop. Some teams will have to build complex things that work, and they can benefit from informed and judicious use of abstractions and views on the system.

On the contrary, people who are doing routine webdev are the ones who can afford the overhead of doing this nonsense, whereas for people who actually have to make something complex that works it will sink them.


Definitely, bad diagrams (as opposed to bad diagram notation) give UML a bad name. Your example of the misuse of state diagrams as a glorified flowchart is something I see very often.

Undergraduate students probably don't have the experience and design maturity to fully appreciate when and how to use most UML diagrams. I think you need to get stuck at design problems a few times before you can appreciate the need to capture ideas using the right diagram, and then you might be ready to adopt some standard so you can share your design decisions with others in a way that correctly captures and conveys just the right amount of detail.


Part of the disconnect is formal modeling methods vs informal ones. There is tremendous value in having a shared understanding of the architecture of a system, such as how components interact, the data model, and any sequence or state modeling if needed. That can sometimes be accomplished just with ephemeral whiteboard diagrams, and sometimes more complex situations require deeper analysis.

The challenge comes when trying to document the entirety of the system, and keep that up to date. The overhead is too high and complexity is such that it defeats the value of a model used to communicate shared understanding.


> Before UML, there were many methodologies.

UML really wasn't a methodology. It was a "visual modeling language" telling you how you should draw a picture of a code construct.

It didn't give much (if any?) advice on which development tasks you should do in which order. It was just a TOOL that could be used as part of your SW development activities.

I'm not sure how people used it. Did they first draw the picture and then code, or vice versa? Or maybe drawing the pictures was optional? A "Software Development Method" should answer such questions, among others.

A note about words: "Methodology" means something like "study of methods". A more proper word for individual documented or prescribed approaches to how to develop software would be simply "method". A "method" tells you how to do something.


I like the class diagram notation, although I get frustrated at the tendency to try to auto-generate it, which ends up including way more detail than is useful.


> Half-seriously, maybe we could use a reset, in which people who neither know or care what they're doing stop being required to go through the motions of pretending to do it.

UML. Use what you need, but need what you use.

Come to think of it, that probably applies to abstraction. And threads. And probably several other things.


Sounds like you never had to deliver something created with that god-awful Rational Rose crapware. Whose genius idea was it to use UML as a code skeleton generator? That is the perfect recipe to guarantee your non-lead developers never gain any ownership of, or intimacy with, the software they are working on.

The only organizations using UML were bureaucratic, heavyweight nightmare places to work, too.


Sequence diagrams are very useful, but UML just adopted a representation that already existed decades before. For example, the original 1981 TCP specification in RFC 793 [0] has ASCII sequence diagrams (see figures 7 to 14, etc). I don't know if they predate that - less from that era is online - but I wouldn't be surprised.

[0] https://www.rfc-editor.org/rfc/rfc793

Edit: there is a form of sequence diagram in an early FTP specification from 1971: https://www.rfc-editor.org/rfc/rfc172.html


Sequence and activity diagrams were already present in Jacobson's original method, which in turn is based on the "Ericsson Approach" (originating in 1967), long before UML; they were called interaction diagrams and state transition graphs.


Who's Jacobson? Google doesn't turn anything up.

Neither does "Ericsson Approach" for that matter..


Have a look at e.g. this presentation: https://web.archive.org/web/20060622072014if_/http://www.tcr...

The state transition graphs of Objectory (Jacobson's method) on the other hand originated in SDL process and procedure diagrams, which were issued in 1976 by CCITT as the Z101 to Z104 recommendations.



Thanks!


Exactly, UML mostly borrowed from existing representations and its purpose was standardizing semantics and providing some consistency among the diagrams.


I thought UML was pretty great when it came out. It attempted to establish a uniform manner to model and communicate software models / systems.

I still think it is great. People who never learned about UML tend to reinvent some such thing when they wish to communicate the same structures and information.

Then you end up with 10 different people drawing approximately the same information in 10 different forms, and all will require some level of explanation of notation and grammar for others to grok it.

A class diagram can contain a lot of rather valuable information.

I do agree that sequence diagrams are great, but if you rip them out of context and throw the rest in the bin it can only partially communicate, whereas if you had the object model to study as well as the sequence diagrams new connections can be quickly made.

To this day I prefer to create my object structures in a UML diagram. Plenty of software takes the diagram and creates the code, or takes the code and produces the diagram.

I love creating the diagram(s) when I first start working on an existing system.

What I have seen of code generation off of sequence diagrams has been hit or miss - more miss. But reverse-engineering sequence diagrams can be quite revealing.


I agree completely. I tend to do 'UML light', where I don't worry about the finer details (e.g. marking fields public or private). The best solution is one where I can sketch out the outlines of a class and have some tool automatically generate UML which I can then edit.

And C4 diagrams are great for explaining software architecture, which wouldn't have been possible without UML and esp. class diagrams.


I still remember a professor at my university who also wrote a UML book. He said it would be the most important thing in your software career and that you would use it nearly daily in your future as a software engineer. I was sceptical because it was too much preaching for my taste. Turns out I was right.

20 years later: I used it very seldom and definitely NOT in my daily software development tasks. Actually the sequence diagrams were really the most useful of them all for me too.


You're lucky if that was the most useless thing you learned. Many peoples' degrees were useless in their entirety!


state diagrams and sequence charts for me, for sure


I work with non-distributed (ish) systems, and I use watered down class diagrams much more often than sequence diagrams.

But even when I was working on more distributed systems, I found more value in state machines and simplified class diagrams, actually. I think in most cases where you're using a sequence diagram, a state machine is a better tool. Both for thought, and for implementation. Surfacing implicit state machines can require some extra upfront design, but if you have a non-trivial number of states, it's pretty much guaranteed to be worth it.


The problem with state machines that I run into often when using them, is "multi-dimensional" states.

When you manage to get that right they are great; otherwise you get loads of edges...


I'm not sure what you mean by "multi-dimensional" states. Is it something that statecharts [1] can help with?

[1] https://statecharts.dev/


UML Statecharts are Harel Statecharts, which are hierarchical state machines. Described here:

Statecharts - A visual formalism for complex systems. David Harel. 1987

https://archive.org/details/7.-statecharts

Looks like you can also freely download the paper from here: https://www.sciencedirect.com/science/article/pii/0167642387...


The cross product of multiple state machines, I expect. If you try and use a single state diagram to encode the product of states, everything multiplies.


That’s where statecharts can help.


Not sure what you mean by multidimensional states?


The car can be in drive, neutral, or reverse, the light can be red, green, yellow, or broken, the seatbelt can be off or on. That's 24 states.


Ah, ok. That's kind of what I meant by upfront design, though. Yeah, mapping this out is gonna suck, but it's better than having it hidden in a bunch of ifs.
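For what it's worth, statecharts handle exactly this with orthogonal (concurrent) regions: the three independent dimensions stay separate, so you draw 3 + 4 + 2 = 9 states instead of enumerating all 3 × 4 × 2 = 24 combinations. A minimal sketch in Mermaid's statechart notation (the transitions shown are illustrative):

    stateDiagram-v2
        state Car {
            state Gear {
                Neutral --> Drive
                Drive --> Neutral
                Neutral --> Reverse
                Reverse --> Neutral
            }
            --
            state Light {
                Red --> Green
                Green --> Yellow
                Yellow --> Red
                Red --> Broken
            }
            --
            state Seatbelt {
                Off --> On
                On --> Off
            }
        }

The `--` separators declare the regions as concurrent, so the diagram still represents all 24 combined states without drawing them.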


This title and article feels plucked out of my brain. Most of my work is a lot of integrations and the best way to get a room full of devs to get what is going on is via sequence diagrams. Every other method, formal or made up, is weak compared to a basic sequence diagram. You can expand from there if needs be but I find it gets people thinking in a more complete way, gets the details out and what ifs and what abouts, and then you have a nice target that teams hit with very high accuracy and completeness.

It is one of the few (only) diagram types that is easy to draw and fairly close to the real world implementation. Most other formal approaches are wonky abstractions that only true practitioners can make sense of.


I agree, not having to be an expert in order to understand is important!


https://sequencediagram.org/ is the best text-to-sequence-diagram tool. It's the only one of these that also lets you draw the sequence diagram with your mouse and will generate the text accordingly. Completely offline (localStorage), and it also supports saving to OneDrive and Google Drive.


This is insanely good, thank you very much for the link! Should be a great tool to "bootstrap" some diagrams during brainstorm meetings, or just to put into documentation, and be able to change them later.


Good tool. They really need to change the cursor on hover to give some hint that you can draw arrows that way!


I personally think Statecharts are a more useful construct than sequence diagrams. It encodes complex behaviour that a developer can use as a guide to develop.

More importantly it gives a good inkling of what behaviour is allowed and what is not allowed.

For example a statechart can encode that a traffic light can only turn green from amber and not from red.

It also has an "H" history state, which remembers which substate a composite state was last in when the system previously left it. This allows modelling behaviour that relies on "memory" of past behaviour, which is very common in real systems.
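The traffic-light rule is a one-liner in statechart notation. A minimal sketch in Mermaid (Mermaid has no history-state support, so the "H" feature isn't shown; in a real model you would also split or guard the two amber phases):

    stateDiagram-v2
        [*] --> Red
        Red --> Amber
        Amber --> Green
        Green --> Amber
        Amber --> Red

There is simply no Red --> Green edge, so the model forbids that transition by construction.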


Agreed. We published a post recently about how to interpret a sequence diagram based on the 7 basic features and how our dev tool makes them automatic and interactive. https://dev.to/appmap/quickly-learn-how-new-to-you-code-work...


I've used sequence diagrams a few times. I find any kind of diagram tooling to be a tedious time sink. The resulting output is a talking point at best: some window dressing for a presentation or internal document. Fluff you add to impress some easily impressed managers or clients. I always feel slightly dirty doing stuff like this. I have UML Distilled on my shelf. A signed copy, even. Haven't opened it in over two decades. It's obsolete.

Diagrams are communication and marketing assets at best. They are not creative tools. You create them for other people, not for yourself. And it takes a lot of effort to create them. If you measure the value of your time in dollars per hour (as your clients/employer should be doing), a good diagram can be several hours of work. So it's expensive - a few hundred dollars is not nothing.

Next time you see a diagram, ask yourself the question "would I pay $300 for this thing?". In my case, the answer is predictably "hell no" 100% of the time. Asking the question is kind of answering it. Diagrams are rarely helpful: either they are way too simple to add much value or way too detailed and hard to understand. There seems to be no middle ground. So, you have three or four things with some arrows and labels. Well, yay! Tell me something I didn't know already.

Diagrams are low value window dressing. A literal waste of time and money. I do them rarely, reluctantly, and only when people really insist on them (and I'll push back a little). I find tools in this space tedious and frustratingly slow to use. I have more interesting things to do typically. If I do them at all, they are going to be a quick and dirty job.

It's not just me. Objectively, diagrams are very uncommon in software development these days. They are not part of regular software development processes, clients don't ask for them anymore, the vast majority of projects (and close to 100% of OSS projects) don't have them, etc.


> And it takes a lot of effort to create them.

Have you tried mermaidjs? It's probably related to mermaidchart.com -- I don't know. The first sequence diagram in the article boils down to this:

    sequenceDiagram
      Customer->>Bank: Login request
      Bank-->>Customer: Login approval
with a fancier image for the customer than a box. Try it out at https://mermaid.live (linked from https://mermaid.js.org/)

Or this one, from a service I worked on a couple of years ago:

    sequenceDiagram
      Title: Service Signup
      autonumber
      User ->> App: User enters email address and password
      App ->> Server: Makes API call to register the user
      alt is not registered or already registered
        Server ->> App: Sends a token for authenticated API calls
        App ->> App: Generates unique ID
        App ->> Server: Makes API call to link user's account with unique ID
      else some other error
        Server ->> App: Returns [TODO: add error cases here] error
        App ->> User: Display error
      end

I hated drawing diagrams with a mouse, but when I found mermaid, I started using sequence diagrams everywhere.

> Diagrams are low value window dressing.

It's about communication, as the article said. The people who need the diagrams the most are often not tech-savvy.


I've used mermaid and lots of other tools. Mermaid is actually my quick-and-dirty tool of choice lately. I sank a lot of time into using mermaid to get some sequence diagrams. They suck visually, and the process of creating them is also terrible. But it gets the job done as far as ass-coverage is concerned.

The example above tells me that you have a bog-standard login thingy that behaves exactly like a login thingy should (no surprises there). A great way to fluff up some document where you state that your login thingy should be a thing in your thing. Great example of why diagrams are a waste of time. There is zero useful/surprising information in there. Plenty of things I would challenge, though. Why does the app have to generate the id for example? What's wrong with the email address as an id?

So not only is it uninformative, it's probably wrong.


> Great example of why diagrams are a waste of time.

I need something for the tech people, and I need something for the money people.

The diagram gives my target audience (again, non-tech people) an overview of the system. There were sequence diagrams for all the functionality we wrote code for. Technical specs and documentation were also written up, but the money people won't bother to wade through that to understand how it works; the dev team can read that.

> Why does the app have to generate the id for example? What's wrong with the email address as an id?

The example was extracted from a project I worked on a couple of years ago. I obfuscated what the "id" field is on purpose. If it helps, think of it as the unique device identifier used to send push notifications.

> So not only is it uninformative, it's probably wrong.

They were not inaccurate.


I think autogenerating UML from existing code to get an overview of e.g. your database schema, state machine or class hierarchy is fine. Hand-drawn UML is useless busywork; it is usually quicker to just write the code. Also, autogenerated UML doesn't go stale as easily, since you can just automatically recreate it.

And I agree, the most useful diagram type is a sequence diagram. The rest is often useless because OOP has declined, the weird arrow types are unintuitive, language features do not always correspond to UML features and UML tooling doesn't fit very well into common software development methods. E.g. you cannot 'git diff' UML diagrams; while it would be theoretically possible (git diff can take helpers), no software that I know of implements this.

Overall, about UML's decline, good riddance. Throwing boxes and arrows at a whiteboard occasionally (without tons of formal semantics) is all anyone needs.


> Comprehensibility > Comprehensiveness #

> The most common failure mode for sequence diagrams is over-complication. (This also is the failure mode for most diagrams, as I wrote in an article on flow charts).

Agreed.

UML – with the goal of being a graphical language for _complete_ specification of a system (both for code generation as well as to have diagrams generated from code introspection) – has to be exhaustive, therefore fails at showing the big picture.

Use of UML that accommodated multiple levels of abstraction would fix that. I believe this is what C4 [1] tries to achieve with its 4 levels of diagram. Unfortunately, everybody who invents a new diagram model also reinvents the wheel and throws the entire UML away. One could easily keep UML's visual language and just standardise on C4's four diagram levels.

[1] https://c4model.com


Although I never use any aspect of UML in my work, I absolutely benefitted from learning it. Learning how to model things in UML required me to change the way I broke down and formalized problems and solutions. That change stuck with me, and I've very much benefitted from it, for the rest of my career.


> the only good thing UML brought to software development

Interesting perspective, which differs from my own experience though. I found some value in class and package diagrams, which proved helpful in detecting problematic relationships between packages at a very early stage. Deployment diagrams have aided me in resource planning and communicating with deployment teams. On the other hand, I've personally never used sequence diagrams beyond academical assignments or simply satisfying my curiosity.

However I definitely agree that UML is extremely bloated. It introduced a myriad of problems so that they could sell solutions in the form of courses, books and certifications. Rational's acquisition by IBM certainly didn't help matters either. And the reference guide by Booch/Rumbaugh/Jacobson is severely painful to read.


Class diagrams and object diagrams are also really useful, for instance when making presentations.

The problem with UML is that the industry went overboard with its usage, like it did with pretty much every tech trend.


UML was yet another fad overdone. Many IT fads do produce useful niches or specific products, but the impression given at the time is they'll replace most of what came before. That's rarely the case. I can list about 25 "trends" like that since the late 80's. Chasing fads has gummed up too many stacks and standards.


I'm still waiting intently for debugging tools that can create *object* diagrams of a running app. Debuggers are excellent, but I keep forgetting whether the object in question is @Hdj637 or @HU83NS.


Just yesterday I referenced the concept of multiplicity - in the values a single pytest fixture may return - and I framed it in terms of UML to make my point. Numerical distinctions in instance counts, collections and whatnot are a valuable articulation that I think UML encapsulates. Maybe there are other ways to communicate this concept, but if someone talks of multiplicity in software design, UML is directly where my mind goes.
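In UML class-diagram notation - here sketched in Mermaid, with illustrative names standing in for the fixture case - that multiplicity reads as:

    classDiagram
        Fixture "1" --> "1..*" Value : yields

i.e. one fixture yields one or more values, which is exactly the instance-count distinction multiplicity captures.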


Sequence diagrams are not the only useful thing. A mix of component and flow diagrams can be helpful for sketching out architecture ideas too.

I use diagrams a lot at work, to the point that I get tired of drawing or writing UML code, so I built a text-to-diagram tool [1] to help translate ideas in my mind into diagrams. Quite fun to use.

[1]: https://chatuml.com


Sequence diagrams are great, but creating them by hand is a pain. It's not just manual toil, they also go out of date. AppMap (where I work) can generate them automatically based on runtime analysis of an application: https://www.youtube.com/watch?v=8l4-hNih_GQ


I am thankful I managed to completely avoid both UML and J2EE in my career. Those embody the worst of so called Software engineering practices that I saw in my 30 years of working. It’s governmental bureaucracy embedded into technology that made it demonstrably worse.


J2EE hasn't been called that in 20 years. It's JEE now, and it's much nicer.


State machine diagrams may be useful from time to time.

Object diagrams are sometimes great illustrations of how data is structured. I specifically recommend object diagrams, not class diagrams, for that. If you are learning an unfamiliar system just by reading the code and making user interactions, it's easy to lose track of what is recorded where. Collecting your observations on a diagram you draw gradually will help you accumulate details and build a clear picture.

Maintaining several simple object diagrams for the main usage scenarios may help future newcomer developers quickly grasp the implementation backbone.


On a related note, I've found that GPT4 is surprisingly good at building basic sequence diagrams, either based on a description or for open-source projects. E.g.

* "Write a MermaidJS sequence diagram showing a banking application's interaction between a customer, authentication service, business logic, and database"

* "Include database transactions in the sequence diagram"

* "Include OAuth in the sequence diagram"

Or for an open-source library:

* "Write a MermaidJS sequence diagram showing a CRUD Flask application"

* "Include actual function names of the Flask and SQLAlchemy API calls"

* "Add a redis cache"


UML is really powerful, but it could definitely be simplified down to basic charts. The problem is that sometimes you need to see the full complexity before you can start simplifying systems. The real decades-old process that gets in the way isn't actually UML, though it appears at first to fit that description - it's Scrum. Scrum-based methodologies for delivering value to the business eventually force engineers to make trade-offs in how thorough their UML and regular flowcharts can be, which is not a scalable way to build quality products. The charts should be precisely as complex as is needed to accomplish the business objective and to ensure that the business is always in the best possible starting position to continue working on a piece of code or a system with just that documentation as the starting point. Sometimes this does mean that a demo may be just a bunch of UML or simpler flowcharts with more breadth instead of user functionality.

UML makes sure that product functionality knowledge can be more easily restored or understood from an engineering mindset. It encodes a lot of information into the format and simpler options exist too. But I have to say once again that you almost got it right as to which decades-old development methodology is pretty deprecated despite still being in fashion - it's not at the diagramming layer, it's at the project management layer.


I feel validated by this headline/post. I've said the same thing over and over in my career and am always astounded when I get pushback on using them as a means to describe systems and interactions, including from some managers (who weren't coders for very long) who would claim "nobody uses them, just read the code".

I don't know how you communicate how systems work without them. You either create a terrible non-visual version in the docs, or a person ends up drawing them on a whiteboard from memory to pass that knowledge down.


I adore how well this article cites sources at the top. Gold links to let others build the case.

I don't know how important it could have been, but the threads here don't talk a lot about UML being more than just a design tool. We think of it as a way to draw pretty diagrams for documents. And we think of architect astronauts who alas all too often weren't great peers, weren't wise experienced seasoned sages. But there was a subtle current of an idea, that coding might evolve beyond source files of text that were hand authored. Computer aided code.

UML still seems like a potentially good/great intermediary target for machine learning to me, or for refactoring, if we could better get UML snapshots of the code and transmogrify them into new shapes/states. UML got damned as much as anything because it existed in a pre-predominantly-open-source world, where it was interlinked with wild and complex vendor tools and methodologies. These kinds of flexible, abstract, malleable code-representation systems existed only in limited/weak forms, deeply tied to ultra-expensive, 100% proprietary tools that only a scant percentage of people who'd touched UML had any knowledge of, and few of them ever really witnessed anyone tapping the power of these possibilities.

What we think of as UML is just a shadow of a bigger idea.


Sequence diagrams are good when you're already inside the system. Showing my age here, but IMO, for sketching a high-level view of the whole system and how its components interact, nothing beats SSADM's data flow diagrams. You can go as detailed as you like. Break parts up into a new component by drawing a ring around them; for network traffic, note where the flows cross the rings and document the message sizes and rates.

Sigh Happy days ...


A strange conclusion for something that has been around for so long. It's a bit like saying horse-drawn carriages are no good because cars exist.

When UML was released, waterfall was pretty much the only model for development, and it was understandably slow, expensive and risky. UML, with processes like UP, was used to bring some order and standardisation to the picture. I used it a lot, and it was really annoying because I just wanted to code, but even in the early 2000s release cycles were long (2-4 weeks) and mistakes would take a long time to spot and rework.

Like all tools, we didn't always use it and we didn't always add 100% detail but we did use it. I don't use it as much any more because code is quick to produce, quick to review, to merge, to fail and to rework. That doesn't mean we shouldn't design up-front but there are many more use-cases where the time and effort of design is just not worth it.

I still use class diagrams sometimes, although they are not really specific to UML. I have also used state diagrams, again for specific scenarios. Let's just understand our tools and decide when and where they are useful.


UML would be fine if it was not treated as a design tool, but as an inspection and visualisation tool of already (partially) built systems.

The way it is today, when somebody insists on designing in UML, the design gets created, then implementation is at best only attempted before the whole thing gets scrapped and the actual working system is built, because most systems are only discovered as they are being built.


Sequence diagrams are outstandingly good and useful. I have a very complex set of interactions between two different servers involving user interfaces, back ends and server to server fetches (somewhat like Oauth but more). Using sequence diagrams to document this complex process helped me identify mistakes and allowed me to simplify the process. Highly recommended.


So we're back in the sixties, even 20 years before Objectory, see e.g. https://web.archive.org/web/20060622072014if_/http://www.tcr....


With any non-trivial set of messages, seq diagrams become unwieldy. Always use communication diagrams over sequence as a first choice.


to the contrary, the whole reason patterns like sequence diagrams are so great is because they make it clear when you're trying to put too much information into a single visualization

your "non-trivial" is my "too much"


University in the late 2000s was teaching UML, and out of all the different diagrams and notations, sequence is the only one I have consistently used since then. The rest of the diagrams mostly felt like they were used in big design documents that stopped being written as agile (or a bastardized form of it) permeated more and more organizations.


I've been toying with flow-based programming again, and it works relatively well as an implementation choice for the same "high level" that sequence diagrams cover: The parts of a lifeline that need to wait are reified as stalled information packets, while request/response APIs are wrapped into nodes (which I've found is a good starting point for practical application - make a library of nodes from API calls). The rest is defined by graph wiring and data types.

As a graph model, FBP lets you fan out widely, but that is something you don't actually want to do most of the time: the benefit I am seeking it out for is in the bounded buffers adding backpressure regulation and debuggability. As such I've currently settled on mostly defining ports in terms of structured data types, then doing destructuring/merging/splitting in custom processors.
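The graph-wiring side of this is easy to sketch in plain Mermaid flowchart syntax (the node names here are made up purely for illustration):

```mermaid
flowchart LR
    IN[incoming request] --> AUTH[auth API node]
    AUTH --> SPLIT{destructure}
    SPLIT --> F1[fetch API node]
    SPLIT --> F2[cache lookup node]
    F1 --> MERGE[merge]
    F2 --> MERGE
    MERGE --> OUT[response]
```

Each edge would carry a bounded buffer in the FBP model, which is where the backpressure comes from.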


I disagree that sequence diagrams are the only useful thing in UML. I will say that the most-taught diagram type (class diagrams) is not only by far the most useless, it is downright detrimental to use.

Wouldn't be surprised if the main reason UML ended so despised is 99% to do with class diagrams.

Sequence diagrams are really nice though.


UML isn't that bad but it isn't that good.

I was involved in a project based on OMG's Meta Object Facility. The MOF is a tiny system designed to be sufficient for bootstrapping an implementation of UML 2, but I found that the bootstrapping is not so straightforward (I think you have to manifest some objects from UML 1 that don't exist in UML 2) and there are some chicken-and-egg problems to resolve. Still, I think it would be possible to build something that, in several stages, can build a complete set of stubs for UML 2 itself as well as any objects defined inside UML 2. I got my project done without doing the whole bootstrap, so I am still wondering... I've solved so many chicken-and-egg problems that nobody else has, and it's earned me just about $0 and 0 cents.


For understanding the structure of an object in memory, a box-and-arrow diagram of a few example instances is worth a hundred diagrams showing types.

UML is useless to the extent that type is useless.

The exact shape in which constructed class instances are hooked up together can easily be where all the semantics lies.


PlantUML is a really convenient tool for drawing diagrams.


I yearn for something that lets me draw sequence diagrams as human-readable ASCII art (instead of declarative statements, as with PlantUML), but that is also rigorous enough to be rendered to a professional-looking PNG when the situation requires it.


It exists. Ditaa.


But it is a great tragedy that we don't have some standardized vocabulary for sketching software architecture (ownership vs. flow of control between components, for example). Something beyond just boxes and arrows. To use for sketching and absolutely nothing else, of course.

I do still use sort-of-UML diagrams for sketching architecture at the 10,000ft level, after the software has been written, as a kindness to those who follow, in the piece of documentation that hovers somewhere around how to build the project. I'm not sure they are truly useful anywhere else. But they are useful.

Note the last fading vestiges of my Rational UML Training Course (unspeakable waste of time and money) in this diagram from the architecture page of one of my personal projects:

   https://rerdavies.github.io/pipedal/Architecture.html
I think they're interesting in that they borrow useful vocabulary from UML, but they also need to invent vocabulary that (probably) isn't in UML. Also, it's probably not valid UML, but I don't care. And the basic information it provides is which of seven files/classes you would start in when trying to fix a bug, before you started crawling through the remaining 174 files and ~250-ish classes.

And most interesting of all: there are no standardized notation conventions for such diagrams that I would expect anyone to understand. I don't think it's reasonable to expect anyone who started programming in the last 20 years to even know what UML is.

My vision of Rational and Grady Booch, while living through that period...

- Grady Booch, an unassuming genius, who had a modest but really very good idea.

- Rational: An insane pack of underfed marketing guys, probably run by a relative of Elizabeth Holmes, who had absolutely zero understanding of the underlying problems UML was designed to address. Ever promising, never delivering.

Maybe it's time to revisit Grady Booch's modest but really very good idea, and consider whether it can be rescued from the horror that UML became.


Use case diagrams are also very useful, I find, in clarifying why we are building something.


Hierarchical finite state machines are nice too.


Sequence diagrams are a secret weapon for designing any complex system. The y direction (relative time) actually means something, unlike in most abstract diagrams. The x direction (actors) highlights the discrete set of participants. The arrows actually mean something too: bugs, features, and new problems all exist at interactions between actors and can be systematically decomposed into work items; just find the arrows.

I know there are more formal tools (The P language comes to mind) but it's hard to beat the simplicity of markdown that gets rendered directly in github's UI.
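For instance, a fenced block tagged `mermaid` in a README or issue renders as a diagram right in GitHub's UI; a minimal sketch with made-up actors:

```mermaid
sequenceDiagram
    Client->>Server: request
    Server->>Queue: enqueue job
    Queue-->>Server: job id
    Server-->>Client: 202 Accepted
```

No build step, no image files to keep in sync with the source text.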


UML as a 4GL and coding the “important parts” died and I’m glad for it.

Turning everything into visualizations and encoding all interfaces and interactions made things cryptic.

Using diagrams sparsely to explain the main design ideas visually? Absolutely.


Sequence diagrams were not original to UML.

Sadly, I cannot remember the method they stole it from. I do remember it was created by a man working for Hewlett-Packard in Britain who published a book around 1993.

Does anyone remember?


TCP/IP Illustrated by W. Richard Stevens has sequence diagrams, although that book dates to 1994 which is right around the time UML was originally being developed. So, it's not a strong data point.

I'm positive you're right that sequence diagrams existed independently of / prior to UML though.


"UML 2.0 Sequence Diagram is strongly inspired by the ITU-T MSC"

https://en.wikipedia.org/wiki/Message_sequence_chart


There are sequence diagrams in the original 1981 TCP specification:

https://www.rfc-editor.org/rfc/rfc793

See figures 7-14.


I consider myself an advocate of sequence diagrams (over all other UML diagrams), so I wonder why the article doesn't mention that this type of diagram is meant to depict testable flows between software components.

It's nice and all that you can clarify to your team what you want to gain (the big picture), and it's also nice if you want to explain how (the details), but in my opinion it's not worth much if your sequence diagram doesn't explain test cases clearly and expect them to be written.


I would really like to use UML as a documentation language more often, but I have yet to find a nice drawing tool that is simple, stupid, and runs natively on GNU/Linux.


I also had a lot of problems finding usable software for drawing UML diagrams. In the end I used UMLet; it worked sufficiently for my tiny class diagram, was easy to use, and let me rearrange everything minutely, as opposed to the underperforming auto-layout done by PlantUML. I also played around with StarUML, but somehow it didn't stick. Time has been wasted trying to recreate the same diagram in all these tools...


I actually used a sequence diagram today. I put it together for a meeting, to provide some visual guidance while discussing the architecture of a component and its expected inputs and outputs. Helpful for aligning expectations and sparking more dialog.

I also use hierarchy diagrams, but a lot less often. Those are used for documenting the architecture of classes whose code can be annoying to navigate.


I like sequence diagrams for understanding unfamiliar or complex code that I need to work with.

Go in, head spins, make a sequence diagram, grok, then make your change.

They're awesome for that, and they make decent notes later, but realistically they'll be out of date by the next time you look at them. I recommend plantuml for this: very friendly syntax, git friendly, and easy to regenerate over time as the code changes


UML may be one of the slickest things to collaborate with ChatGPT on... It's one of those things that can be incredibly useful, but more often than not the time suck just isn't worth it. Well, just describe the type of graph you want in plain English and the type of markup to use.

For example this single prompt (real)...

"Please convert the following to a PlantUML flow diagram. Partition if necessary."

[Source Code]


I like the arrow semantics: dashed with a line tip is "depends on"; straight with a triangle tip is "inherits from". Standardizing them is useful.


Sequence diagrams really shine when you’re documenting different parts of a system and the various ways these parts interact with each other.


You could say that's the only thing they can do, by definition.


UML is great to clarify situations that are harder to grasp with other communication tools. And by communication, I don’t mean only with other, but also with oneself.

Certainly UML is not always the best tool, but I see no point in pretending it's a garbage toolbox no one should care about, except for the one tool that ever helped with the work you had to do.


I really love Mermaid Diagrams. This is one of the best libraries to come out. I'm writing a book, and being able to generate diagrams with mermaid and then customize the CSS to meet the guidelines from my editor has been fantastic. I've started including a lot more diagrams in my projects and documentation as well. It's just such a good tool.


Thanks I am happy to hear that!


On the subject: diagonal lines in UML diagrams or not? I've seen both but somehow diagonals do feel wrong to me. I feel like only horizontal lines should be used, as is the case in TFA.

I'm also nearly sure that in the beginning it was the only way: there weren't diagonal lines drawn across the diagrams.

So: only horizontal lines or are diagonals cromulent?


As a general rule I've come to believe that anything with "unified" (or the like) in its title is destined for doom if there's a sales/marketing/monetization agenda behind it. And if it's a bunch of hackers who agreed to clean up a mess, then it is often successful.


I like state diagrams, but someone told me that they predate UML, I don't know, they're useful.

The bad thing about UML was demanding all kind of diagrams for every project, whether it made sense to do it or not.

UML is neither the only "method over common sense" software ideology out there, nor will it be the last one.


I'm sure mermaid is nice but if you haven't tried https://swimlanes.io/ then you're really missing out! It's fantastically low friction; so much so that you can use it as a thinking aid.


> Contrary to our expectations and previous work, the majority of sketches and diagrams contained at least some UML elements.

I don't have access to the paper, but I can't shake the feeling that those "some UML elements" were boxes with class names in them or something like that.


I'm not a developer (more like a translator) and love sequence diagrams. They are so useful to explore system and data flow designs.

Also my favorite tool for them: https://sequencediagram.org/


UML is completely crap as a design language. But it can sometimes be useful as a documentation language.


It's crap at that too. It was an excuse to sell Rational Rose. It doesn't need to be excused.


i agree that rose was crap. but some other tools, such as enterprise architect were not so bad, despite the name. and the ent arch guys were always very helpful. this would be something like 20 years ago - gosh.


I used sequence diagrams in the 80s doing network protocol design. UML was apparently published in 1995


I'll just hop in here without reading the article and say that I've actually been using UML (plantUML specifically if that matters) to communicate with LLMs about software structure with some pretty good success.

Turns out computers are good at dealing with symbols, who knew!?


I use sequence diagrams the most, but I also like the state charts, and the simple class diagrams.


I am shocked - I completely agree with the author on all points!

Sequence diagrams are awesome. The rest of UML (and all the crazy Rational Rose nonsense!) was so much angst over such petty points of notation.

And then there is the dumpster fire called Enterprise Architect…


Also Service Blueprints. I find some flavor of Service or Business Process blueprint invaluable when collaborating outside of engineering to understand how a system will be delivered. These are just 90-degree-rotated sequence diagrams.


Even the second graphic in that article is confusing enough that I would be asked to redraw it in a different way. A clever tool that programmers can learn and understand is often not expedient, or is less efficient, for others.



I remember times in the 90s when we planned a software system in a UML-powered tool called Rational Rose. Oh my god, it was a clumsy and slow process. But yes, sequence diagrams are a very useful tool.


I agree with the article that sequence diagrams are great, but I’ve also used state diagrams (not the formal kind from UML) a fair bit for sketching state transitions.


Weird, I just discovered and made my first UML sequence diagrams yesterday and today it’s at the top of HN. Super useful for describing my problem and quite straightforward.


Haha, I used to work on standardizing architecture of a large US enterprise conglomerate and we found out that the sequence diagrams were the only useful thing from it all.


Class diagrams are pretty good too? When done with just a few types, that is, and without all the complex unnecessary nearly-Java-specific stuff that they added later on.



UML was a good idea, but at some point it became this ever-expanding attempt to model every business process and left software far behind.


Wrong. Never used them, probably never will.

>> The programming use case died because, according to Hillel Wayne, “even most proponents of UML considered it a terrible idea.”

When Ivar Jacobson's Use Cases were added to UML it was a day to celebrate for me and many others. I still create them 30 years later with other methodologies, even ones with no direct tie to software development like Dr. Eli Goldratt's TOC (Theory of Constraints).


When will people learn.. you need to write down stuff. Pictures are a good addition to writing, not the other way around.


UML is also good for one other thing: providing endless amounts of blog spam and article fodder


and consulting fees!


Several phrases from that era are still echoing in my head from time to time: Grady Booch, Rational Rose.


Sequence diagrams for showing specific interactions, box-and-line diagrams for showing static relations.


I reached the same conclusion within approximately 1 week of first learning about UML.


I don't really see them as an advance on the century old flow charts.


The flowcharts (that I've seen) don't do a good job of capturing who does what in the process, whereas sequence diagrams do.

Side note - my father, during his career, had a huge collection of flowchart stencil sets that he used; I really wish that I had a chance to grab them.


Am I the only one who finds sequence diagrams hard to read?


UML is also really useful for modelling relational databases


May I ask what you feel UML brings to the table that we don't already get with existing (non-UML) ER diagrams?


UML is both broader and more formal than ER


Can you give an example? Some try to put too much detail into ER diagrams, in my opinion. A Data Dictionary is usually a better place for such details. ERDs should mostly illustrate relationships.

One trend/fad was to put words describing the links between tables, but I usually didn't find them helpful. Maybe if the wording were done well it would help, but most seem forced in practice. Good naming takes experience. Maybe let newbies draft the phrases, but have someone with experience review them. I.e., mentoring.


> A Data Dictionary is usually a better place for such details.

Self-documenting code/schemas are an even better place.

...sometimes I feel like I'm the only one in the world who uses DB-level metadata (e.g. `sp_addextendedproperty` in SQL Server) to attach explanatory notes and other metadata to database objects, including columns and constraints - and it gets better because I modified my Entity Framework scaffolding templates to then include those comments in the generated C# code as XML-doc (or JS Doc comments in TypeScript) - and the entire DB schema is also kept in source-control (using SSDT).

Additionally, because CHECK constraints in SQL are declarative it means I don't need to write-up a human-readable explanation of (for example) the format restrictions based on a column in the CHECK constraint, because it's immediately visible and obvious (and yes, my scaffolding templates also include the CHECK's expression in C# code-comments too for-reference).

----

Another technique I'm a huge fan of now is using predicate types (similar to dependent types) by taking advantage of class invariants: I have my own zero-overhead types (i.e. elided structs) like `NonEmptyImmutableList<T>`, which immediately lets everyone know that if it's passed as a parameter then it won't ever be empty; whereas if the code used the stock `List<T>` or `IReadOnlyList<T>` types, you'd have to write up how that list should be used, which no one should have to do.

I just lament that my daily-driver languages (namely C#) make it kinda tedious to define types like that.


>DB-level metadata

I've used it for years - usually auto-populated from the model that builds the relational schema (like you had done). Same for constraints. By coincidence, I just convinced my team to start using sp_addextendedproperty.

That's the advantage of using a higher-level notation, such as UML or code-first EF, for your ORM.

Good idea on the C# type naming conventions.


> Self-documenting code/schemas are an even better place

But they don't offer enough columns and detail for certain things, in my experience. A shop-rolled data dictionary can be "shaped" the way the shop needs.


Addendum: I suppose we could use sp_addextendedproperty, but it's usually just easier to work with a "regular" table.


State machines and activity diagrams have their uses.


Oh come on. UML database tables aren't half bad.


is there any way for blind people to read UML diagrams? they don't seem very accessible.


UML is dead. Let it rest in peace.


I would also add Flow Diagram.


I totally agree.


Object interaction diagrams are better than sequence diagrams.


Statecharts[1] predate UML, and became part of UML[2].

From the time of their invention to the present, people have devised systems for serializing (i.e. as data structures and code) statecharts and translating them to (or interpreting them as) executable programs[3].

While they may not be all the rage and take some discipline to learn and use effectively (what software tool/concept doesn't?), they can be a valuable tool for building reactive systems:

   A typical reactive system exhibits the following distinctive
   characteristics:

   It continuously interacts with its environment, using inputs and
   outputs that are either continuous in time or discrete. The inputs
   and outputs are often asynchronous, meaning that they may arrive or
   change values unpredictably at any point in time. This should be
   contrasted with transformational systems, in which the timing of the
   inputs and outputs is much more predictable. A transformational
   system repeatedly waits for all its inputs to arrive, carries out
   some processing, and outputs the results when the processing is done.

   It must be able to respond to interrupts, that is, high-priority
   events, even when it is busy doing something else.

   Its operation and reaction to inputs often reflects stringent time
   requirements.

   It has many possible operational scenarios, depending on the current
   mode of operation and the current values of its data as well as its
   past behavior.

   It is very often based on interacting processes that operate in
   parallel.

   Examples of reactive systems include on-line interactive systems,
   such as automatic teller machines (ATMs) and flight reservation
   systems; computer-embedded systems, such as avionics, automotive, and
   telecommunication systems; and control systems, such as chemical and
   manufacturing systems.
— Harel and Politi, Modeling Reactive Systems with Statecharts: The STATEMATE Approach (1998) [4]

[1] https://www.inf.ed.ac.uk/teaching/courses/seoc/2005_2006/res...

[2] https://en.wikipedia.org/wiki/UML_state_machine

[3] https://userweb.cs.txstate.edu/~rp31/papersSP/stateMate.pdf

[&] https://www.w3.org/TR/scxml/

[&] https://github.com/jbeard4/SCION

[&] https://github.com/statelyai/xstate

[&] https://www.rtsys.informatik.uni-kiel.de/en/archive/kieler

[&] search the web and you'll find more

[4] https://www.wisdom.weizmann.ac.il/~harel/reactive_systems.ht...
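As a notation aside, the hierarchical (nested) states that statecharts introduced can even be sketched in Mermaid's state-diagram syntax; a toy example with made-up states:

```mermaid
stateDiagram-v2
    [*] --> Idle
    Idle --> Active : start
    state Active {
        [*] --> Reading
        Reading --> Processing : input ready
        Processing --> Reading : done
    }
    Active --> Idle : interrupt
```

The composite `Active` state is what distinguishes this from a flat finite state machine.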


I was asked to read Harel's paper as part of my thesis. I personally think this stuff is wonderful and foundational to computer science and computer engineering, researching issues like concurrency and system complexity. But I can see how industry programmers would find little direct use for it.


And yet the motivation for Harel's paper was his being approached by his nation's avionics industry to develop some concrete tech to help the EE and software engineers work with confidence in their development of reactive systems, ones with lives on the line.

Summary: https://www.wisdom.weizmann.ac.il/~harel/papers/Statecharts....

Detailed account: https://www.wisdom.weizmann.ac.il/~harel/papers/Statecharts....

   One of the most interesting aspects of this story is the fact that the work was not done in an academic tower, inventing something and trying to push it down the throats of real-world engineers. It was done by going into the lion's den, working with the people in industry. This is something I would not hesitate to recommend to young researchers; in order to affect the real world, one must go there and roll up one’s sleeves. One secret is to try to get a handle on the thought processes of the engineers doing the real work and who will ultimately use these ideas and tools. In my case, they were the avionics engineers, and when I do biological modeling, they are biologists. If what you come up with does not jibe with how they think, they will not use it. It’s that simple.


Yes, being practically minded (engineering-centric) was an important aspect and intention of the Statecharts formalism. But the disconnect today, I think, is that the demographic on Hacker News isn't avionics engineers or biologists; it's more like FAANG/SV developers, and as many have said here, they just don't see the relevance (and call the proponents architecture astronauts or whatever). An aerospace specialist will expend the effort to learn a theoretical formalism; an iPhone developer, maybe not so much.


And yet many a web or iPhone dev will wildly smear ad hoc state machines within modules and across modules in their codebase via if/case, maybe dressing it up a bit with redux, et al.

As the codebase grows, teams of those devs will labor under the constant threat of suffocation from the thrashing-swirling spaghetti of events and side-effects they themselves authored and don't know how to tame.



