Manual Tests or Automated Tests? The answer is “yes”.

A line from the new movie Hidden Figures reminds me of an adage that we’ve developed.  When the question is “Should we do this or that?”, the right answer is usually “Yes, do this and that.”

The same is true for automated tests and manual tests.  Unless a project is a one-time, throwaway effort, it will almost certainly benefit from tests that are repeatable and executed automatically.  On the flip side, any project that has real humans as users should have real humans making sure it works.

The line from the movie was from John Glenn: “You know you can’t trust something that you can’t look in the eyes.”  He was asking Katherine Johnson to double-check the calculations that came from the computer.  The computers did the math faster and with more accuracy than the humans, yet it still takes a smart human to make sure it’s right.


Eliminating biases in A/B testing

A/B testing is a powerful customer-driven quality practice, which allows us to test a variety of implementations and find which works better for customers.  A/B testing provides actual data, instead of the HiPPO (the Highest Paid Person’s Opinion).

The folks at Twitch found that the users in the test cell had higher engagement than the control group. They also found that some of this higher engagement came from factors other than the new experience, which could bias their results.  Factors like the Hawthorne effect and an influx of new users break the randomness of the experiment.

They adjusted the data to reduce the impact of these effects, and provided a great case study on how they did it.


Triage for Static Analysis issues

“This new tool looks great, but who is going to triage the results and open the bugs?”

Engagements with a static analysis tool often start this way.  Sorting through the results to determine which are real, and which are false positives, is a task that someone has to do. That task takes time away from the other activities that person would have been doing anyway, and often this perceived workload will kill the idea before it starts. There is another way.

Here is a quick tip to move past the question of who will triage the issues: automate the triage process. Automatically assign each issue to the person most likely to fix it: the author of the code.

The static analysis tool will provide the defect type and some metadata about the defect, like a severity level. It will also provide the source path, filename, and line of code where the problem was found.

A good heuristic to use in automatically assigning the bugs is the last person to check in a change to that file.  In all likelihood, that person created the error.  On the teams where I’ve implemented this method, we generally see 80–90% accuracy for the triage. That is, 80–90% of the time the issue the static analysis tool flagged was introduced by that check-in.

For the balance, we ask the developers to investigate the issue anyway. I like to use the analogy of picking up trash while on a walk.  You didn’t leave that trash; someone else did. However, by picking it up, you leave the trail a little bit nicer for the next person.  Think about fixing legacy bugs like this: if you are working in that file and you find a bug, go ahead and fix it.

Implementing this process does require a small bit of scripting. First, the script retrieves the list of new issues from your scan (daily, weekly, whatever frequency you run the scans). Then, for each new issue, it looks up in the source control system the author who made the last check-in on that file.  Next, it creates a record in the bug tracking system (or updates the assignee field in the defect record of your tool, if it has a tracker). Finally, it sends a notification to the author that they have a new assignment.
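The steps above can be sketched in a short script. This is a minimal illustration, not the exact implementation from any particular team: it assumes Git for source control and represents issues as simple dictionaries with `file` and `line` keys (your scanner’s export format will differ). The bug-tracker and notification steps are tool-specific, so they are left as comments.

```python
import subprocess

def last_author(repo_path, filename):
    """Ask git for the email of the most recent committer to `filename`.

    Assumes a Git repository; swap in the equivalent query for your
    source control system.
    """
    result = subprocess.run(
        ["git", "-C", repo_path, "log", "-1", "--format=%ae", "--", filename],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def triage(issues, lookup_author):
    """Assign each new static-analysis issue to the file's last committer.

    `issues` is a list of dicts from your scanner's export (assumed to
    carry at least a `file` key).  `lookup_author` is injected so the
    git call above can be replaced or stubbed out in tests.
    """
    assignments = []
    for issue in issues:
        author = lookup_author(issue["file"])
        assignments.append({**issue, "assignee": author})
        # Here you would create a record in your bug tracker and
        # send the author a notification -- both are tool-specific.
    return assignments

# Example with a stubbed author lookup (a real run would pass
# lambda f: last_author("/path/to/repo", f)):
new_issues = [{"file": "src/parser.c", "line": 42, "type": "null-deref"}]
assigned = triage(new_issues, lambda f: "alice@example.com")
print(assigned[0]["assignee"])  # → alice@example.com
```

Passing the author lookup in as a function keeps the script testable and makes it easy to point the same triage logic at a different version control system later.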

This assignment process has been successful for several teams; it might help yours.  Give it a try.  Of course, track the success rate and make any needed adjustments.

Good luck.

Change Leadership for the Quality Team

Often, to improve quality, we need to change the way things happen upstream.  We need to influence other groups to change the way they do things.  Leading change can be boiled down to four steps:

  1. Build the case for change
  2. Plan the change
  3. Test the change
  4. Rollout and make adjustments

I’ll be presenting this model at the Pacific Northwest Software Quality Conference on October 17th, 2016.  The great folks at Quardev host the monthly meetup in Seattle of the Quality Assurance Special Interest Group, where I was able to preview the talk.  They graciously provided the video:

Links to the supporting materials, references, and videos are available on the blog here.