Monthly Archives: February 2017

The Test Leader

Katrina Clokie has a really good blog post describing the difference between a test leader and a test manager.  The test leader influences testing across the organization without direct positional authority.  I’d suggest one tweak: the test manager role should also include the leadership qualities of the test leader.

Especially in Agile, but applicable in any life cycle, everyone tests.  Developers should be testing their code, and product managers (or product owners) should be participating in acceptance testing. Everyone has a role in building in quality, even if their activities are not strictly testing.  The test manager can play an influential role with these other groups, in addition to leading their direct team.

I’ve been advocating the role Quality Leader instead of Test Manager to stress the influential capabilities of test managers:

 

Leading from the front

Photo Credit: Olivier Carré-Delisle

 

Bill Gates on Automating Tests

OK, he wasn’t talking specifically about automating tests. But he did talk about automating the jobs that can be automated and redirecting human effort toward things “where human empathy and understanding are still very, very unique.”

His topic was taxing the output of robots and using those funds to train the displaced workers for those new roles.

“So if you can take the labor that used to do the thing automation replaces, and financially and training-wise and fulfillment-wise have that person go off and do these other things, then you’re net ahead.”

Read the full article, and see the video here. 

Testing Efficiency – A Better View

The ISTQB defines Testing Efficiency as the number of defects resolved over the total number of defects reported.  This is meant to measure the test team by the relevance of the bugs they report.  A low efficiency would imply that the test team is reporting many bugs that are not worth fixing.
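
As a quick illustration, that definition boils down to a simple ratio. Here is a minimal sketch in Python; the numbers are hypothetical, just to show the arithmetic:

```python
# Testing Efficiency, per the ISTQB-style definition above:
# defects resolved divided by total defects reported.
def testing_efficiency(defects_resolved: int, defects_reported: int) -> float:
    if defects_reported == 0:
        return 0.0
    return defects_resolved / defects_reported

# Hypothetical numbers, for illustration only.
print(testing_efficiency(defects_resolved=80, defects_reported=120))  # ~0.67
```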

This view is pretty limited and simplistic.

A better approach is to measure the “resolution category” for the bugs that are closed. When bugs are resolved, they are marked with a category like “Fixed”, “Cannot Duplicate”, or “Duplicate of another bug”.  The categories can be graphed on a pie chart:

Pie Chart showing Resolved Bugs by Category

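If your bug tracker can export its closed bugs to a CSV file, a small script along these lines will produce that kind of chart. Treat it as a sketch: the file name and the “resolution” column are assumptions, so adjust them to match whatever your tracker actually exports.

```python
import csv
from collections import Counter

import matplotlib.pyplot as plt

# Count closed bugs by their resolution category.
# Assumes a CSV export with a "resolution" column; adjust for your tracker.
with open("closed_bugs.csv", newline="") as f:
    counts = Counter(row["resolution"] for row in csv.DictReader(f))

labels, sizes = zip(*counts.most_common())
plt.pie(sizes, labels=labels, autopct="%1.0f%%")
plt.title("Resolved Bugs by Category")
plt.show()
```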

Now you can have a conversation about the bugs being reported and whether improvements are warranted. We had this exact issue on a team that I led a while back.  We made a few adjustments:

Duplicate – we upgraded the bug tracking system to improve the search function. This allowed the testers to search for duplicates before submitting a new bug.  If they found the bug already reported, they reviewed it to see if they could add any new information.

Cannot Duplicate – for these, we did bug huddles with the developers, showing them a demo of the bug before writing and submitting it. This practice really helped get bugs fixed faster by eliminating the back-and-forth that sometimes happens.

Business Decision – Many of these were closed by the developers without involving the Product Manager in the decision. We added the PM as the person to “verify” bugs closed with this resolution to make sure they agreed.

Pie chart after improvements.


Want to learn more about leadership in software testing? Check out the Software Leadership Academy.

A Corollary to the Red-Bead Experiment, with Salmon

“The results came from the system, not the people.”

This is Dr. Deming’s conclusion to his famous red-bead experiment.  He believed that defects are inherent in any complex system.  The experiment consists of simulated workers scooping beads from a bucket and delivering them for inspection.  Most of the beads are white, while some are red.  The red beads represent defects.
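
The dynamics are easy to play with in code. Here is a minimal simulation sketch; the bead mix (20% red) and the scoop size of 50 are my assumptions for illustration, not an exact reproduction of Deming’s protocol:

```python
import random

# Every "worker" scoops from the same bucket, so red beads (defects) show up
# in every delivery regardless of skill. The bead mix and scoop size are
# illustrative assumptions.
BUCKET = ["red"] * 800 + ["white"] * 3200   # 20% red beads
SCOOP_SIZE = 50

def work_day(worker_count: int = 6) -> list[int]:
    """Return the number of red beads each worker delivers in one day."""
    return [
        sum(bead == "red" for bead in random.sample(BUCKET, SCOOP_SIZE))
        for _ in range(worker_count)
    ]

for day in range(1, 5):
    print(f"Day {day}: red beads per worker -> {work_day()}")
```

Run it a few times: the “best” and “worst” workers change from day to day, even though nothing about the workers changed.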

The experiment shows that everyone will deliver defects to the finished product, because there are defects (red beads) in the original bucket.  Last fall, I had the privilege to watch a live version of the experiment, conducted by Rex Black, at the Pacific Northwest Software Quality Conference in Portland.  Here is the video, which is well worth the watch:

My corollary comes from sport-fishing.  Living in the Bay Area, I’m blessed with being able to fish for salmon right outside the Golden Gate Bridge.  I usually fish from a party boat, with 25-40 other fishers.

One ritual is the “jackpot”: when we push off in the morning, everyone puts $5 into a pot, and at the end of the day, the person with the largest fish wins it.  The jackpot winner is usually congratulated for being really good at fishing.

Nice Fish

The corollary comes from the nature of how we fish with all those people on the boat. Everyone uses the same gear, the same bait, and the same technique. The skill for finding fish comes from the captain, who maneuvers the boat to intersect with the salmon school.  From the salmon’s perspective, they see 25 identical baits.

The best fish goes to a random person, selected by the fish, not by the skill of the fisherman.

In the book Outliers, Malcolm Gladwell tells us that great success is often random chance that happens to well-prepared individuals who are already experts in their field.

Salmon fishing feels the same way.