So, a concrete example from my recent past: a system designed to allow agents in the field to submit contracts and have them crop up in an ancient legacy system.
This had a single purpose, but it was structured as several discrete apps. I'm going off memory here as I didn't work across all the apps, but it looked like this:
1. a single-page JavaScript app (Angular, CoffeeScript, Nginx, static site)
2. a web service to accept the contracts (Ruby, Sinatra; sketched just after this list)
3. a web service to pre-process the contracts and queue them for insertion into the legacy system (Ruby, Grape, Amazon RDS)
4. a service to de-queue the contracts and insert them into the Windows-based legacy system (.NET, SQL Server, IIS)
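To give a feel for how thin each piece was, here's a minimal sketch of the kind of Sinatra service described in step 2. The route, payload shape and responses are my assumptions for illustration, not the original code:

```ruby
require 'sinatra'
require 'json'

# Hypothetical contract-acceptance endpoint; the path and fields are illustrative.
# The real service handed accepted contracts on to the pre-processing service
# (step 3); here we just parse the JSON and acknowledge receipt.
post '/contracts' do
  content_type :json
  contract = JSON.parse(request.body.read)
  halt 422, { error: 'missing contract id' }.to_json unless contract['id']

  status 202
  { accepted: true, id: contract['id'] }.to_json
end
```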
There were many, many advantages to this structure:
* each individual component[1] was trivial to understand - on the order of hundreds of lines of functional code
* we could choose the technology appropriate to the job
* we only had to use Windows for the bit that interfaced with the legacy system[2]
* the only technology choices that spanned the entire stack were HTTPS and UTF-8
* the status of each individual component was available through HTTP requests; most also made an RSS feed of their activity available for monitoring (see the sketch after this list)
* we could easily experiment with new technologies as they emerged, without changing or breaking All The Things
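To make the status point concrete: a component's status endpoint might have looked something like the sketch below. The path and field names are assumptions on my part, and the RSS feeds aren't shown, but the idea is that anything that can speak HTTP can poll a component without knowing how it's implemented:

```ruby
require 'sinatra'
require 'json'
require 'time'

# Hypothetical status endpoint; field names are illustrative only.
get '/status' do
  content_type :json
  {
    status: 'ok',
    pending_contracts: 0,                   # would be read from the queue/DB in a real service
    last_processed_at: Time.now.utc.iso8601
  }.to_json
end
```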
Some caveats:
* we had a highly skilled dev team, with experience across a wide range of technologies and platforms
* 'the business' understood what we were doing, and why - in fact, the whole purpose of the project was to transform our previously monolithic architecture in this way
* log collection (using Splunk) allowed us to track the progress of individual contracts through the system, and alerted us when one stalled (e.g. was being rejected due to formatting issues)
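That last point worked because every component logged events keyed by a contract identifier that Splunk could index and search. Roughly like the following, though the field names and JSON-per-line format are my assumption rather than the original logging code:

```ruby
require 'json'
require 'logger'
require 'time'

# Hypothetical structured logging: one JSON event per state change, always
# carrying the contract id so a collector such as Splunk can follow a single
# contract across every service in the pipeline.
LOGGER = Logger.new($stdout)

def log_event(contract_id, event, detail = nil)
  LOGGER.info({
    contract_id: contract_id,
    event: event,
    detail: detail,
    at: Time.now.utc.iso8601
  }.compact.to_json)
end

log_event('C-12345', 'received')
log_event('C-12345', 'rejected', 'date field not in expected format')
```

Searching the collector for a contract id then gives that contract's full history, and a saved search over stalled or rejected events is roughly how the alerting described above can be built.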
[1] Except for the last one, because of the ludicrous complexity introduced by interfacing with the legacy system. But all of it except for the legacy interop was easily grasped.
[2] Not knocking .NET or C# here; both are pretty good these days. But the Windows ecosystem is just not as developer-friendly as *NIX.