The difference between accidental and essential complexity cannot be overstated. Removing accidental complexity, even if it flies in the face of "best practices," can be very powerful and appropriate -- as long as you understand where you are intentionally coloring outside the lines, and the conditions under which you'll need to revert to best practices for scaling up.
An example: I attended a conference presentation in which the presenter discussed dissecting the implementation of Identity Server into a dozen sub-services backed by half a dozen data stores, with Redis caching layers and an Event Sourcing model for propagating inserts and updates of the underlying data. This would be a prime example of accidental complexity gone wild if you built it just to have a multi-container microservice -- unless your single sign-on service as a whole needs to support 100MM+ users, in which case this is essential complexity, not accidental.
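For readers unfamiliar with the pattern, the Event Sourcing piece of that architecture can be sketched in a few lines: writes are recorded as immutable events in an append-only log, and downstream read models (e.g. a Redis-style cache) rebuild their state by consuming the stream. Everything here (`UserEvent`, `EventLog`, `CacheProjection`) is hypothetical illustration, not the presenter's actual design:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class UserEvent:
    kind: str        # e.g. "insert" or "update"
    user_id: str
    data: dict

class EventLog:
    """Append-only log that fans events out to subscribed projections."""
    def __init__(self) -> None:
        self._events: list[UserEvent] = []
        self._subscribers: list[Callable[[UserEvent], None]] = []

    def subscribe(self, handler: Callable[[UserEvent], None]) -> None:
        self._subscribers.append(handler)

    def append(self, event: UserEvent) -> None:
        self._events.append(event)          # durable record of every change
        for handler in self._subscribers:   # propagate to read models
            handler(event)

class CacheProjection:
    """Read-side view (stand-in for a Redis cache) rebuilt from the stream."""
    def __init__(self) -> None:
        self.users: dict[str, dict] = {}

    def apply(self, event: UserEvent) -> None:
        current = self.users.get(event.user_id, {})
        self.users[event.user_id] = {**current, **event.data}

log = EventLog()
cache = CacheProjection()
log.subscribe(cache.apply)

log.append(UserEvent("insert", "u1", {"email": "a@example.com"}))
log.append(UserEvent("update", "u1", {"email": "b@example.com"}))
```

Note that even this toy version carries real costs -- an extra moving part per projection, and eventual consistency between the log and each view -- which is exactly why it's accidental complexity at small scale and essential complexity only when the load demands it.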
Reducing accidental complexity but being mindful of how it could become essential complexity under certain conditions in the future is the mark of a wise architect, IMO.