Hacker News

The Travis smoke tests against PRs are only done in a single configuration, so are relatively cheap.

I think it would double expenses because our CI already runs at capacity; to do parallel builds we would have to contract for double the compute resources. (With a different purchase structure for our CPU time you might be right, but I'm not sure.)




If cost is a factor then you could use batch testing as your default mode of operation and not bother testing individual commits at all unless a batch fails.

Using batch testing as a default would both reduce costs and increase merge speed, assuming that your commits usually pass testing.

The downside of using batching as a default is that it wouldn't test every commit in isolation. That means it wouldn't necessarily be safe to roll back to a particular commit if that commit was tested as part of a batch. E.g. if patches A, B and C were tested together, then it's not certain that patch A by itself would pass the tests.


That's a good point that I didn't consider. You could squash commits to get that property back, but the results wouldn't carry much meaning.


If your test-suite pass rate is significantly greater than 70% (and failures are statistically independent across PRs), you can gain throughput by grouping PRs into batches.

For example, if the pass rate is 80%, then by batching 3 PRs together there's a 51.2% chance all three pass, which saves 2 test runs, and a 48.8% chance the batch fails, which costs 1 extra run. That's a pretty substantial increase in throughput.
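The arithmetic above can be checked with a few lines, assuming the simple model where a failed batch of k PRs falls back to k individual runs (so 1 + k runs total):

```python
def expected_runs(p: float, k: int) -> float:
    """Expected CI runs to clear a batch of k PRs, where each PR
    passes independently with probability p. A passing batch costs
    1 run; a failing batch costs 1 + k (the batch plus retests)."""
    pass_all = p ** k
    return pass_all * 1 + (1 - pass_all) * (1 + k)

# Pass rate 80%, batches of 3: 0.8**3 = 51.2% chance the batch passes.
print(expected_runs(0.8, 3))  # ~2.46 runs per 3 PRs, vs 3 runs individually

# Break-even for k = 3: batching wins when p**3 > 1/3, i.e. p > ~69.3%,
# matching the "significantly greater than 70%" threshold above.
print((1 / 3) ** (1 / 3))
```

Under this model, 80% pass rate with batches of 3 averages about 2.46 runs per 3 PRs instead of 3, roughly an 18% saving.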



