It would be great to tie this into a release pipeline, where the release process actively monitors the failure rate of the newly deployed service, so that bad deploys can be halted or rolled back automatically.
I was thinking this could work really well alongside production integration tests. A percentage of production traffic can be dynamically routed to the newly deployed instances, letting the release pipeline verify the service is functioning correctly before any real users are routed to it.
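The idea above can be sketched as a small canary router: it sends a configurable fraction of traffic to the new version, tracks the canary's failure rate, and halts the rollout once the rate exceeds a threshold. This is a minimal illustration, not any specific tool's API; the class and parameter names (`CanaryRouter`, `canary_fraction`, `failure_threshold`, `min_samples`) are all invented for this sketch.

```python
import random

class CanaryRouter:
    """Illustrative canary rollout: route a slice of traffic to the new
    release and halt if its observed failure rate is too high.
    All names here are hypothetical, not from a real deployment tool."""

    def __init__(self, canary_fraction=0.05, failure_threshold=0.02, min_samples=100):
        self.canary_fraction = canary_fraction      # share of traffic sent to the canary
        self.failure_threshold = failure_threshold  # max tolerated canary failure rate
        self.min_samples = min_samples              # don't judge before enough requests
        self.canary_requests = 0
        self.canary_failures = 0
        self.halted = False

    def choose_backend(self):
        """Pick which version serves the next request."""
        if self.halted:
            return "stable"
        return "canary" if random.random() < self.canary_fraction else "stable"

    def record_result(self, backend, success):
        """Feed back the outcome of a request; may halt the rollout."""
        if backend != "canary":
            return
        self.canary_requests += 1
        if not success:
            self.canary_failures += 1
        failure_rate = self.canary_failures / self.canary_requests
        if self.canary_requests >= self.min_samples and failure_rate > self.failure_threshold:
            # A real pipeline would also trigger the rollback/alert here.
            self.halted = True
```

In a real pipeline the "traffic" would be the production integration tests mentioned above, and `record_result` would be driven by health checks or error metrics rather than per-request callbacks.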