If there were a way to produce bounded context boundaries by following a general pattern / algorithm, what would that be? I fail to see how they can be created usefully without lots of conversations with domain experts plus elbow grease. It's the part of software that remains entirely art and not science.
* Try to minimize the overlap between bounded contexts - loosely coupling domain models.
* Default to "too large" to begin with and institute a pattern for breaking down a bounded context into two smaller contexts.
I tend to find that any process that defaults to "have more conversations/interactions" degenerates into wheel spinning without some sort of specific plan for what those interactions would entail.
> Try to minimize the overlap between bounded contexts - loosely coupling domain models.
What if this doesn't lead to a more accurate representation of the domain? What if two contexts are just coupled in the business, for good reasons?
> Default to "too large" to begin with and institute a pattern for breaking down a bounded context into two smaller contexts
This is exactly the ambiguous advice that you are railing against. How do you know when to break down a context?
It's like, building any system involving many actors and actions is hard, that has nothing to do with software. We're just digitizing the same patterns and behaviors that people have used to run companies for hundreds of years.
People want a playbook to be followed to arrive at a "perfect" domain model or architecture. I'm sorry, that sounds pretty farfetched to me. It reminds me of how we first started thinking about computability theory, when David Hilbert proposed that we should be able to devise an algorithm that could decide the truth or falsity of any logical statement (the Entscheidungsproblem). Hilbert was one of the smartest mathematicians to ever live, and he was very confident that this could be done.
Well, Alan Turing, Kurt Gödel, and Alonzo Church (not slouches in their own right) all smashed that idea with various proofs. The truth can often be counterintuitive. I am sorry that the world is complex, I also wish it weren't so.
I think you misunderstand. I said that I thought that DDD would prescribe something along these lines. I am not endorsing this as a fully complete, usable process; I am saying that something like this is both possible and necessary for the paradigm to function.
It's a critical topic that is right at the heart of DDD. I researched this topic up and down, and unlike "where to use a factory," the DDD community refuses to go into even as much detail on it as I just did with my half-baked comment.
I am not trying to "complete" DDD here. I think DDD should largely be consigned to the trash heap.
> This is exactly the ambiguous advice that you are railing against. How do you know when to break down a context?
This is a very important question. I will bite and give some guidance, because no one else is willing to give advice here: if your development team grows beyond about 10 software engineers, you need to split the context. That way each bounded context has a complete team on it, becoming experts in the subject as well as in the software implementation, and each team can work independently of the other teams.
I've been replying to you in other threads. There is no answer to this question, it is an art. I understand that it's frustrating, but that doesn't make it any less true.
As for what the goal of the art is: the goal is to avoid linguistic and semantic ambiguities in the ubiquitous language. There is even a section entitled "Recognizing Splinters Within a Bounded Context" where specific examples are given:
* duplicate concepts
* false cognates
If you have truly duplicate concepts across contexts, this is a symptom of the lines not properly being drawn, and perhaps a new, shared context is missing.
False cognates are the bread and butter of bounded contexts though - these occur when two areas of a business use the same exact word for something, but they mean slightly different things _depending on the business context_. The example given in the book is the notion of a "Charge," which customer invoicing and bill payment departments might both use. But each department only cares about certain pieces of data of a charge, so if one "Charge" model were created, it would have to accommodate all the different ways the teams use it, making it more complicated. Even worse, sometimes the same term is used in _conflicting ways_. That is a semantic collision, creating ambiguity in the model.
This is what a bounded context is meant to address. Each department gets its own model, each with its own version of Charge. The code and data are fit for the specific business purpose they're serving, instead of having a one-size-fits-all model that gets the job done but is more complicated to use in all contexts.
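To make the Charge example concrete, here is a minimal sketch of what "each department gets its own model" can look like in code. The field names, the id scheme, and the translation function are all my hypothetical illustrations, not the book's code; the point is that each context keeps a model shaped by its own needs, with an explicit translation at the boundary instead of one shared Charge.

```python
from dataclasses import dataclass

@dataclass
class InvoicingCharge:
    """Invoicing context: cares about what the customer owes and why."""
    invoice_id: str
    description: str
    amount_due_cents: int

@dataclass
class PaymentCharge:
    """Bill-payment context: cares about settling the charge."""
    payment_id: str
    amount_paid_cents: int
    settled: bool = False

def to_payment_charge(charge: InvoicingCharge) -> PaymentCharge:
    """Explicit translation at the context boundary (an anti-corruption
    layer, in DDD terms), rather than one model serving both departments."""
    return PaymentCharge(
        payment_id=f"pay-{charge.invoice_id}",  # hypothetical id scheme
        amount_paid_cents=charge.amount_due_cents,
    )
```

Note that the bill-payment model never sees `description` at all: each side carries only the data it cares about, and conflicting uses of "Charge" can't collide because they are different types.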
Honestly curious, have you read the book? I still don't think it will give you what you're looking for in terms of a prescriptive formula for "doing DDD right," but there's quite a bit of guidance in there.
I'm familiar with the idea that "duplicate concepts" indicate that you should have a separate bounded context, from, I think, Martin Fowler's blog? This is actually partly what I was referring to when I said hand waving.
It's conceptually similar to answering the question "How do I know where the borders of Germany lie?" by saying "ask the first person you see if they speak German".
It also conflicted with a process I followed, which was to essentially create a team glossary and agree to semantically disambiguate terms which had multiple different meanings (e.g. linux user/website user instead of just user) and even just "ban" the usage of terms which got overloaded too much.
(I discovered that semantic collisions didn't just present problems in code, it often prevented you and your team from having coherent conversations).
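The glossary process described above can also be pushed into the type system. A minimal sketch, assuming the linux user / website user example from the glossary (the type and field names here are my own illustration): retire the overloaded word and make the disambiguated terms distinct types, so code can never silently confuse them.

```python
from dataclasses import dataclass

# "user" is banned from the vocabulary; only the disambiguated terms exist.

@dataclass
class LinuxUser:
    username: str
    uid: int

@dataclass
class WebsiteUser:
    email: str
    display_name: str

def home_directory(account: LinuxUser) -> str:
    # The signature now states unambiguously which kind of "user" it means.
    return f"/home/{account.username}"
```

Passing a `WebsiteUser` to `home_directory` is now a type error a checker can catch, mirroring how the glossary made the team's conversations unambiguous.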
This could, of course, then put everything we touched as a team into the same bounded context. Or not...?
> The example given in the book is the notion of a "Charge," which customer invoicing and bill payment departments might both use. But each department only cares about certain pieces of data of a charge
It sounds like they're essentially saying (not explicitly, but via assumption) that your software should follow Conway's law.
Nonetheless, this example screams "bug alert" to me, since assumptions made by departments (and, as a consequence, software systems) about what they should care about are where the really nasty bugs lie - frequently driven by misunderstandings between departments about terms (e.g. what counts as a user).
Why is that a bad thing?