Development practices like Test-Driven Development can lead to high levels of Unit Test coverage. Sometimes, the team reaches 100% statement coverage. How can testers add value when the code is “fully covered”?
Example of a project with 100% Unit Test Coverage
I provide a few examples in this StickyMinds article:
Review unit tests for missing cases. Just because every line of code is executed doesn’t mean every case is exercised; inputs the tests never supply might still cause the code to fail.
Review unit tests against the product requirements. The unit tests check that the code works as the developer intended; by comparing them against the requirements, you can verify that the functionality also meets what was actually asked for.
Review the unit tests themselves. Having 100% coverage means the code was executed during a test, not that the tests are good, or even that they check anything. Verify that the assertions are valid and useful.
Review exception handling. Unit testing is a good way to test exception handling because the developer has full control of the mocked environment and can simulate the exceptions. These exceptions are difficult to inject in other types of testing, so making sure the unit tests cover them well is important.
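To make the assertion and exception points concrete, here is a small sketch using Python's unittest and unittest.mock. The function under test is hypothetical, invented for illustration; the first test reaches 100% statement coverage of the happy path while asserting nothing, which is exactly the kind of test a review should flag.

```python
import unittest
from unittest.mock import Mock

# Hypothetical function under test: divides a total by a count and
# reports failures through a supplied logger.
def average(total, count, logger):
    try:
        return total / count
    except ZeroDivisionError:
        logger.error("count was zero")
        return None

class AverageTests(unittest.TestCase):
    # Weak test: executes the code (coverage goes up) but asserts nothing,
    # so it can never fail on a wrong answer.
    def test_runs_without_asserting(self):
        average(10, 2, Mock())

    # Better test: checks the actual result.
    def test_returns_mean(self):
        self.assertEqual(average(10, 2, Mock()), 5)

    # Exception path: the mocked logger lets us confirm the handler ran
    # and did the right thing, which is hard to trigger in system tests.
    def test_zero_count_logs_error(self):
        logger = Mock()
        self.assertIsNone(average(10, 0, logger))
        logger.error.assert_called_once_with("count was zero")
```

Run with `python -m unittest` in the usual way. All three tests pass and together give full coverage, but only the last two would catch a regression.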
Other ideas? Great, please check out the article and leave a comment either here or there.
In the US, today is Pi day. March 14, or as we write it with numbers 3.14. Happy Pi day everyone!
However, in Europe, dates are typically written with the day first, then the month. So, today is 14.3 over there. Nowhere near Pi. Instead, the closest thing to Pi day in Europe would be 22/7 (July 22nd), since 22/7 is a common approximation of Pi.
Which is more accurate?
Testing both approximations is pretty easy with Wolfram Alpha. The error in each approximation is the absolute value of the difference between Pi and the approximation. The following screen shows the result of asking whether the error in the US version is greater than the error in the European version of Pi day:
Comparing the US version of Pi day (3.14) to the European version (22/7) with Wolfram Alpha
Europe wins this time. 22/7 is a better approximation than 3.14.
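If you'd rather check without Wolfram Alpha, the same comparison is a few lines of Python:

```python
import math

# Error of each "Pi day" date when read as an approximation of pi.
us_error = abs(math.pi - 3.14)     # March 14 -> 3.14
eu_error = abs(math.pi - 22 / 7)   # July 22  -> 22/7

print(f"|pi - 3.14| = {us_error:.6f}")   # about 0.001593
print(f"|pi - 22/7| = {eu_error:.6f}")   # about 0.001264
print("22/7 is closer" if eu_error < us_error else "3.14 is closer")
```

The European date wins by roughly 0.0003.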
Me: “This week, we ran 250 test cases. 245 passed and 5 failed.”
Flip the slide.
Me: “Here is our bug find/fix graph. This week, we found 30 bugs and fixed 42. This is the first week where fixes outnumbered finds.”
VP: “Wait, go back one slide. How come you only had 5 tests fail, but found 30 bugs?”
On this project, we created test cases based on the written requirements, and could show traceability from requirements down to test results. These test cases were intended to show that we met the customer’s requirements – and the customers had full access to this data.
In addition to the official test cases, we also ran many exploratory tests. The testers spent time experimenting, not following a direct script. In terms of finding bugs, this exploratory approach was much more productive than following the script.
One reason: the scripts were written from the requirements, the same document that influenced the design and the code. We should have been surprised if the prepared test cases found any bugs. Professional, smart testers following their noses found many of the issues that would have frustrated customers.
(these events happened a long time ago, on a product where we had an annual release cycle and 3 months of system test – that feels like forever ago)
A line from the new movie Hidden Figures reminds me of an adage that we’ve developed. When the question is “Should we do this or that?”, the right answer is usually “Yes, do this and that.”
The same is true for automated tests or manual tests. Unless a project is truly one-time-use, throwaway code, it will almost certainly benefit from some tests that are repeatable and run automatically. On the flip side, any project that has real humans as users should have real humans making sure it works.
The line from the movie was from John Glenn, “You know you can’t trust something that you can’t look in the eyes.” He was asking Katherine Johnson to double-check the calculations that came from the computer. The computers did the math faster and with more accuracy than the humans, yet it still takes a smart human to make sure it’s right.