> Touca started out of frustration with the poor developer experience of writing, maintaining, and running software tests.
> Despite growing usage, we had not yet found product-market fit when the down market started. We failed to secure more funding. Our revenue declined as some of our largest customers downgraded to cut costs. In the process of trying to convert our nice-to-have product into a must-have, we burnt out.
To clarify, you were competing with this flow: when somebody opens a pull request on GitHub, GitHub Actions (or a webhook to TeamCity or Jenkins or whatever) runs "cargo test/npm test/dotnet test/whatever", right?
In a perfect world, what did you expect adoption to look like? How would you get people off of the flow I just described?
Touca (github.com/trytouca/trytouca) gives fast feedback (via email or PR comment) on each PR you create, showing how the behavior and performance of your software have changed compared to a previous trusted version.
We were not competing with or replacing GitHub Actions, so the flow you described wouldn't change. If you were to use Touca, you would continue to run "cargo test/npm test/dotnet test/whatever", which likely run your unit tests and integration tests.

Touca tests are more like snapshot tests and property-based tests in that you wouldn't specify expected values. You could run them locally, on CI, or on a dedicated machine.

You would write high-level regression tests using our SDKs in Python, TypeScript, C++, or Java that let you capture the values of variables and the runtime of functions for any number of test cases. The SDKs would submit that information to our remote server, which would compare it against the baseline version and visualize and report any differences.
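To make that concrete, here is a minimal sketch of what such a test looked like with the Python SDK, based on its documented `workflow`/`check` API; the `students` module, `find_student` function, and the captured fields are hypothetical stand-ins for your own code under test:

```python
import touca

# Hypothetical code under test: a function that looks up a student record.
from students import find_student


@touca.workflow
def students_test(username: str):
    # Capture how long the function under test takes for this test case.
    with touca.scoped_timer("find_student"):
        student = find_student(username)
    # Capture values of interest without specifying expected values;
    # the server compares them against the trusted baseline version.
    touca.check("fullname", student.fullname)
    touca.check("gpa", student.gpa)


if __name__ == "__main__":
    touca.run()
```

You would run this locally or in CI with your API key and a version identifier; the server would then compare the captured values and timings against the baseline and report any differences back on the PR.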
> Touca (github.com/trytouca/trytouca) gives fast feedback (via email or PR comment) on each PR you create, showing how the behavior and performance of your software have changed compared to a previous trusted version.
Like SonarQube?
> If you were to use Touca, you would continue to run "cargo test/npm test/dotnet test/whatever", which likely run your unit tests and integration tests.
My company can barely invest in 80% unit test coverage. We have engineers writing tests to game coverage with no assertions. How is an organization expected to invest even more time in writing another type of test?
No. Instead of linting or static analysis, we tell you when a recent code change is causing your software to behave or perform differently when handling a particular test case. Like Cypress, but for software workflows that don't have a web interface or aren't easy to test end-to-end.
> My company can barely invest in 80% unit test coverage. We have engineers writing tests to game coverage with no assertions.
Organizations are already investing in this other type of test, by assigning QA teams to write and run them manually or by using test automation tools. Touca offers a developer-friendly alternative to these tools that helps orgs save costs. It's a shift-left testing solution that gives engineers the same confidence QA teams provide, but during the development stage.

In my experience, teams resort to superficial coverage goals because higher-level tests, and the test infrastructure needed to run them continuously, are too expensive and complicated. We wanted to fix that.