
I've used a similar technique for testing: generate an expected output and an actual output, then diff them.

One trick I found helpful was using JSON to serialize test results instead of unstructured plain text.

Test results stored as JSON are much easier to parse and therefore to process. You can quickly whip up programs that verify the tests satisfy invariants, diff test results, and filter expected test changes from unexpected ones.
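The idea can be sketched in a few lines of Python, using only the standard library. The function names and the result structure here are hypothetical stand-ins, not from any particular project:

```python
import difflib
import json

def run_system_under_test():
    # Hypothetical stand-in for the real code being tested.
    return {"users": 3, "errors": [], "status": "ok"}

def diff_against_golden(actual, expected):
    """Serialize both sides as pretty-printed JSON and diff line by line.
    sort_keys + indent gives a stable, structured text form that diffs cleanly
    even when dict key order changes."""
    actual_lines = json.dumps(actual, indent=2, sort_keys=True).splitlines()
    expected_lines = json.dumps(expected, indent=2, sort_keys=True).splitlines()
    return list(difflib.unified_diff(expected_lines, actual_lines,
                                     fromfile="expected", tofile="actual",
                                     lineterm=""))

expected = {"users": 3, "errors": [], "status": "ok"}
diff = diff_against_golden(run_system_under_test(), expected)
print("PASS" if not diff else "\n".join(diff))
```

Because the serialized form is structured, the same JSON files can also be loaded back and checked programmatically (invariants, filtering), not just diffed as text.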



When I did it, we skipped the Make part and reinvented it in Python: https://github.com/libfirm/sisyphus

This distinction between expected and unexpected test changes is even more important than the diffing itself, in my opinion. It lets you add a failing test the moment you get a bug report, and you notice when you fix something accidentally.
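One simple way to get that behavior is a known-failures list: tests on the list are expected to fail (the bug report is filed but not yet fixed), so only deviations from expectation are reported. A minimal sketch, with hypothetical test names:

```python
# Tests known to fail, awaiting a fix (hypothetical names).
KNOWN_FAILURES = {"test_unicode_names"}

def classify(results):
    """results: dict mapping test name -> bool (passed).
    Returns (unexpected_failures, unexpected_passes). An unexpected pass
    means a known-failing test now succeeds -- something got fixed,
    possibly accidentally, and the list should be updated."""
    unexpected_failures = [n for n, ok in results.items()
                           if not ok and n not in KNOWN_FAILURES]
    unexpected_passes = [n for n, ok in results.items()
                         if ok and n in KNOWN_FAILURES]
    return unexpected_failures, unexpected_passes

results = {"test_basic": True, "test_unicode_names": False, "test_merge": False}
print(classify(results))  # (['test_merge'], [])
```

The run only "fails" when either list is non-empty, so known bugs don't drown out new regressions.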


The Readme of that project could do with a few examples of what tests and successful/unsuccessful output look like. I found the examples folder and still can't visualize what it might be like.

I've been using cram (also written in Python) for a private project and been mostly happy with it: https://bitheap.org/cram/
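For anyone trying to visualize it: a cram test is a plain text file mixing shell commands with their expected output. Cram runs the commands and diffs what actually happened against the file. A minimal example in cram's documented format (commands are indented two spaces and prefixed with `$ `, expected output follows on indented lines):

```
  $ echo hello
  hello
  $ echo hi >> greetings.txt
  $ cat greetings.txt
  hi
```

If the actual output differs, cram reports a unified diff against the test file, which is the same expected-vs-actual idea as above.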



