
So I got in a discussion with someone about why we need to test post-deployment. My point was that because your environments are different, you want to eliminate a failure point even if you've tested at build time. You can make everything as similar as possible, but you still want to catch failures from a bad configuration, schema or integration. I was once at a place where someone deployed something and just let it produce malformed data because a schema wasn't applied to the workers' transformation process. You know what would have caught that? Post-deploy testing. And how does this fit in an automated pipeline? You automate the test.
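Roughly what such an automated post-deploy check could look like as a step the pipeline runs right after the deploy finishes (the endpoint, field names and URL below are made up for illustration):

    // post-deploy-check.ts -- run by the pipeline right after a deploy.
    // Assumes Node 18+ for the built-in fetch; URL and fields are placeholders.
    const BASE_URL = process.env.DEPLOY_URL ?? "https://staging.example.com";

    async function main() {
      // Hit an endpoint that exercises the real transformation path.
      const res = await fetch(`${BASE_URL}/api/workers/latest-output`);
      if (!res.ok) {
        console.error(`post-deploy check failed: HTTP ${res.status}`);
        process.exit(1);
      }

      // Assert the schema was actually applied -- the exact failure mode
      // described above (workers quietly emitting malformed data).
      const body = (await res.json()) as Record<string, unknown>;
      const missing = ["id", "processedAt", "payload"].filter((f) => !(f in body));
      if (missing.length > 0) {
        console.error(`malformed output, missing fields: ${missing.join(", ")}`);
        process.exit(1);
      }

      console.log("post-deploy check passed");
    }

    main().catch((err) => {
      console.error(err);
      process.exit(1);
    });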



At the risk of tooting my own horn too much, this is exactly why I started my company https://checklyhq.com

We approach it a bit differently: we blur the lines between E2E testing and production monitoring. You run an E2E test and promote it to a monitor that runs around the clock.

It's quite powerful. Just an E2E test that logs into your production environment after each deploy, and then every 10 minutes, will catch a ton of catastrophic bugs.

You can also trigger them in CI or right after production deployment.
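For a concrete feel, this is roughly the kind of Playwright script you'd promote from a one-off CI test to a scheduled monitor (URL, selectors and env var names here are placeholders, not any real setup):

    // login.check.spec.ts -- the same script runs in CI after a deploy
    // and then on a schedule as a monitor. Selectors and URL are placeholders.
    import { test, expect } from "@playwright/test";

    test("production login works", async ({ page }) => {
      await page.goto("https://app.example.com/login");
      await page.fill('input[name="email"]', process.env.MONITOR_USER!);
      await page.fill('input[name="password"]', process.env.MONITOR_PASS!);
      await page.click('button[type="submit"]');

      // If this assertion fails at 3am, an alert fires -- the same code path
      // that gated the deploy is now the production monitor.
      await expect(page.getByText("Dashboard")).toBeVisible();
    });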

Big fat disclaimer: I'm a founder and CTO.


I played with this concept a while back when I noticed that the E2E tests I was writing closely matched the monitoring scripts I would write afterwards. At one point I just took the E2E test suite, pointed it at production and added some provisions for test accounts. Then I just needed to output the E2E test results into the metrics database and we had some additional monitoring. It's a kind of monitoring-driven development, and as with TDD, it's great for validating your tests as well.
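The "output the E2E test results into the metrics database" part can be as small as a custom reporter. A sketch assuming Playwright and a generic HTTP metrics endpoint (the endpoint and metric names are made up):

    // metrics-reporter.ts -- pushes each test result to a metrics store.
    // Assumes Node 18+ for global fetch; endpoint and metric names are placeholders.
    import type { Reporter, TestCase, TestResult } from "@playwright/test/reporter";

    class MetricsReporter implements Reporter {
      onTestEnd(test: TestCase, result: TestResult) {
        // Fire and forget; a dropped data point shouldn't fail the run.
        fetch("https://metrics.internal.example.com/api/points", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({
            metric: "e2e.production_check",
            test: test.title,
            status: result.status,     // "passed" | "failed" | "timedOut" | ...
            durationMs: result.duration,
            timestamp: Date.now(),
          }),
        }).catch(() => {});
      }
    }

    export default MetricsReporter;

Register it in playwright.config.ts with reporter: [["list"], ["./metrics-reporter.ts"]] and the production-pointed runs start feeding the dashboards.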


This falls in line with my current worldview as well.

Many classes of automated test ("regression", "smoke", "e2e", etc.) are really the same test; the only difference is values like the application location or maybe the expected data. To your point, IIRC New Relic's synthetic monitoring used Selenium under the hood. If you take this approach and you tag your tests, you can have the [CI/CD system] use the tags to run specific sets of them at specific steps or times with the desired inputs.
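A sketch of that shape, using Playwright tags purely for illustration (the tags, stages, commands and URLs are made up):

    // checkout.spec.ts -- one test, selected per stage via its tags and
    // parameterized by BASE_URL. Tags, commands and URLs are placeholders.
    //
    //   post-merge:   npx playwright test --grep @smoke
    //   post-deploy:  BASE_URL=https://app.example.com npx playwright test --grep @e2e
    //   monitoring:   same command, run on a schedule instead of a pipeline trigger
    import { test, expect } from "@playwright/test";

    const BASE_URL = process.env.BASE_URL ?? "http://localhost:3000";

    test("checkout page renders @smoke @e2e", async ({ page }) => {
      await page.goto(`${BASE_URL}/checkout`);
      await expect(page.getByRole("heading", { name: "Checkout" })).toBeVisible();
    });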

But, some of that requires a coherent strategy for test data, which seems to be a common pain point for orgs.

[Edit: To be extra clear, not all types of tests fit this. You obviously don't need to run unit tests against your prod app. And, writing/managing things like API tests in a tool like Cypress isn't necessarily a good idea.]


Sounds in concept like exactly what we are doing, but "as a service". We let you manage it from your code base (or the UI if you want) and take care of the metrics ingestion, dashboarding and alerting.


There are a lot of tools that do this, especially Jenkins. As long as you have deployment webhooks on both Jenkins and the CI/CD pipeline, it works flawlessly.


It amazes me that there's a drive to make deploys as seamless as possible to avoid customer impact, but nobody wants to invest in systems integration testing in order to make sure that what they're actually serving is indeed correct. Most of the time, the "acceptance test" is verifying that it renders and the basic behaviors of the page operate as expected. It is insulting to the people and teams that actually spend the time to do what is right.



