A story of exploratory testing – 30 bugs

Me: “This week, we ran 250 test cases.  245 passed and 5 failed.”

Flip the slide.

Me: “Here is our bug find/fix graph. This week, we found 30 bugs and fixed 42. This is the first week where fixes outnumbered finds.”

VP: “Wait, go back one slide.  How come you only had 5 tests fail, but found 30 bugs?”

30 bugs?

On this project, we created test cases based on the written requirements, and could show traceability from requirements down to test results.  These test cases were intended to show that we met the customer’s requirements – and the customers had full access to this data.

In addition to the official test cases, we also ran many exploratory tests. The testers spent time experimenting rather than following a script. In terms of finding bugs, this exploratory approach was far more productive than the scripted one.

One reason: the scripts were written from the requirements, the same document that shaped the design and the code. It would have been surprising if those prepared test cases had found many bugs. Professional, smart testers following their noses found many of the issues that would have frustrated customers.

(These events happened a long time ago, on a product with an annual release cycle and three months of system test – it feels like forever ago.)