
I can sympathize with the author’s love/hate relationship with tests, but I can’t help feeling it’s because we as developers so often test the wrong things entirely.

I don’t typically write tests, but they do make sense in a few cases (specifically end-to-end tests that check for well-defined outputs from well-defined inputs). I was inspired by Andreas Kling’s method of testing Ladybird: he would find a visual bug, recreate it in a minimal reproducible example, fix the bug, then codify the example into a test so the regression was captured in his test suite[0]. This grew into a seemingly large suite of tests that let him keep modifying the browser without fear of regressing somewhere.
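The pattern is simple to sketch. Here is roughly what one of those captured regressions could look like, written in TypeScript against Node’s built-in test runner (renderToText, the input, and the expected string are all hypothetical stand-ins for whatever reproduces a given bug):

    import test from 'node:test';
    import assert from 'node:assert/strict';

    // Hypothetical entry point into the system under test.
    import { renderToText } from './renderer';

    // Once a minimal reproduction is found and the bug is fixed,
    // the reproduction becomes a permanent regression test.
    test('empty block keeps its margin box', () => {
      const input = '<div style="margin: 8px"></div>';
      const expected = 'block at 0,0 size 8x8'; // output after the fix
      assert.equal(renderToText(input), expected);
    });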

I used this method of testing while writing a code highlighter based on TextMate grammars. Since TextMate grammars produce a well-defined output for a given input of code + grammar, I could mimic that output in my own highlighter and compare it against TextMate’s for testing purposes. I wrote a bunch of general-purpose tests, then ran into a bunch of bugs where my output didn’t match. As I fixed those bugs, I added each example to my test suite.
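Concretely, each captured case becomes a golden-file comparison. A minimal sketch (tokenize and the fixture paths are hypothetical; the point is just diffing my tokens against TextMate’s recorded output for the same code + grammar):

    import test from 'node:test';
    import assert from 'node:assert/strict';
    import { readFileSync } from 'node:fs';

    // Hypothetical: my highlighter's tokenizer.
    import { tokenize } from './highlighter';

    test('matches TextMate output for fixtures/hello.ts', () => {
      const source = readFileSync('fixtures/hello.ts', 'utf8');
      // Reference tokens previously captured from TextMate for
      // the same code + grammar pair.
      const expected = JSON.parse(
        readFileSync('fixtures/hello.ts.tokens.json', 'utf8'),
      );
      assert.deepEqual(tokenize(source, 'source.ts'), expected);
    });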

Anyway, my code highlighter was slow, and I wanted to re-architect it to speed it up. The test suite let me completely change the way it worked with confidence. Tests stayed broken for a while in the middle of the refactor, but once I finished and started fixing them, there was a domino effect: I only had to fix a few tests, and that ended up automatically correcting the rest. Now I have a fast code highlighter and confidence that it’s at least at bug-for-bug parity with the slow version :)

[0]: https://youtu.be/W4SxKWwFhA0?si=PJs_7drb3zVxq0ub



