The main takeaway for me is clean modularity: in a system with strong decoupling, the modules can live in a single application boundary (a monolith) or across multiple boundaries (services or microservices).
The design challenge becomes making sure that your modules can execute within the monolith or across services. The work to be done can run at the thread level or the process level. The interfaces or contracts should look the same from the developer's point of view. The execution of the work is delegated to a separate framework that can flip between thread and process models transparently, without extra code.
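To make the idea concrete, here is a minimal sketch of the pattern (Go-flavoured; the OrderService contract, the in-process and HTTP-backed implementations, and the /orders endpoint are all invented for illustration, not the actual framework I used):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// OrderService is the contract a module exposes; callers depend only on this.
type OrderService interface {
	PlaceOrder(item string, qty int) (string, error)
}

// localOrders runs in-process (the "thread level" case).
type localOrders struct{}

func (localOrders) PlaceOrder(item string, qty int) (string, error) {
	return fmt.Sprintf("order for %dx %s accepted", qty, item), nil
}

// remoteOrders forwards the same contract over HTTP (the "process level" case).
type remoteOrders struct{ baseURL string }

func (r remoteOrders) PlaceOrder(item string, qty int) (string, error) {
	body, _ := json.Marshal(map[string]any{"item": item, "qty": qty})
	resp, err := http.Post(r.baseURL+"/orders", "application/json", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var out struct{ Message string }
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.Message, nil
}

// NewOrderService is where the "framework" flips the model: deployment
// configuration decides which implementation the caller gets, and the
// calling code never changes.
func NewOrderService(deployedAsService bool, baseURL string) OrderService {
	if deployedAsService {
		return remoteOrders{baseURL: baseURL}
	}
	return localOrders{}
}

func main() {
	// Flip the boolean (or drive it from config) to move this module
	// out of the monolith without touching the call site.
	svc := NewOrderService(false, "http://orders.internal:8080")
	msg, err := svc.PlaceOrder("widget", 3)
	if err != nil {
		panic(err)
	}
	fmt.Println(msg)
}
```

The point is that the caller only ever sees the interface; whether the work happens on a thread in the same process or in a separate service is a deployment decision, not a code change.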
This is how I approached my last microservices project. It could be built as one massive monolith or deployed as several microservices. You could compose or decompose depending on how many resources were needed for the work.
I fail to understand why techniques around these approaches aren't talked about in detail. They have some degree of difficulty in implementation but are very achievable and the upsides are definitely worth it.
This definitely sounds like an interesting approach.
I think, however, that defining these modules and the interfaces between them is the hard part. Part of this work is defining the bounded contexts and what should go where. If I understand DDD correctly, this shouldn't be done by "tech" in isolation. It's something that's done by tech, business and design together. This is hard to do in the beginning – and I would argue that it should not be done in the beginning.
When starting out you've just got a set of hypotheses about the market, how the market wants to be addressed, and how you can generate any turnover doing it. This isn't the point at which one should be defining detailed bounded contexts; one should instead just be experimenting with different ways to find market fit.
There should be a name for it. If a name is established so people can talk about it more easily, it would make a big difference. This is a kind of design pattern, although not an object-oriented one.
Could you elaborate on the framework you mentioned for flipping between process and thread models? That sounds interesting. Was it released or just used internally for some projects?