Too much of a good thing: the trade-off we make with tests | nicole@web
In this article, Nicole discusses calculating (or at least estimating) the cost of a bug versus the cost of writing a test, and then using that metric to drive (or at least inform) decisions about how many tests to write.
She acknowledges the difficulty of measuring some of these inputs, especially the indirect impact of bugs (e.g. customer churn).
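A rough sketch of what that calculus could look like, in Python (the function names and figures here are illustrative, not from the article):

```python
# Back-of-the-envelope model: write the test when it costs less than
# the expected cost of the bug it would catch. All names and numbers
# below are hypothetical.

def expected_bug_cost(probability: float, direct_cost: float,
                      indirect_cost: float) -> float:
    """Chance the bug ships, times what it costs when it does.
    The indirect part (e.g. customer churn) is the hard-to-measure one."""
    return probability * (direct_cost + indirect_cost)

def worth_testing(test_cost: float, bug_cost: float) -> bool:
    return test_cost < bug_cost

# A 20% chance of a $5,000 fix plus an estimated $10,000 of churn
# has an expected cost of $3,000, so a test costing $500 pays off.
print(worth_testing(500, expected_bug_cost(0.2, 5_000, 10_000)))  # True
```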
It’s well known that targeting 100% code coverage is a bad idea, but why is that, and where should we draw the line?
Tests help you solve problems by mitigating risk. They let you check your work and validate that it’s probably reasonably correct (and if you want even higher confidence, you start looking to formal methods).
If you aim for 100% code coverage, you’re saying that ultimately any risk of a bug is a risk you want to avoid. And if you have no tests, you’re saying it’s okay to have severe bugs with maximum cost.
Discussion in Lobste.rs: https://lobste.rs/s/1yx0wg/too_much_good_thing_trade_off_we_make_with
Normally, 100% of something is all of it, the most that you can have. And it’s true that once you have 100% coverage, you can’t have more line coverage. Still, there’s a good chance you can improve your test suite if you so desire: somewhere in those lines of code, there’s a spot where simply executing each line doesn’t give you enough coverage, where you have to think really hard about different test cases that go through the same lines but differ in other ways that can hide bugs. [ - - ] The place I’m most worried about bugs isn’t the 10% of lines that aren’t covered (though I would like to push coverage closer to 100%); it’s finding the bugs in the core of the engine.
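To make the line-coverage point concrete, here’s a tiny hypothetical Python example: a single happy-path test executes every line of `mean`, so coverage reports 100%, while a bug on a different input through those same lines stays hidden until you think to write the second case.

```python
def mean(xs):
    # One happy-path test executes this single line, so line
    # coverage already reads 100%.
    return sum(xs) / len(xs)

def test_mean_basic():
    assert mean([2, 4, 6]) == 4.0   # passes; coverage: 100%

def test_mean_empty():
    # Same line, different input: mean([]) divides by zero.
    # Line coverage can never tell you this case is missing.
    assert mean([]) == 0.0          # fails with ZeroDivisionError
```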