I have a talk at the upcoming STPCON in Phoenix, called Metrics: Choose Wisely. I’ll be featuring some of the content here as a preview. Please feel free to comment or ask questions here, and by all means, do attend the conference if you can.
At the talk, I’ll be providing a methodology for creating software quality metrics that tie into your business goals. Then I’ll pick apart some of my own work by showing various fallacies that come with using these metrics. The first fallacy to watch for is survivor bias.
For a quick exercise, think about a medieval castle. What material are castles made of?
Yeah, stone castles are what we picture when we think about castles. In fact, most castles were made of timber. However, today we mostly see the castles that survived for hundreds of years. We see only the stone castles because the wooden castles have burned or rotted away.
For an example where survivor bias may distort conclusions drawn from a metric, consider this chart, which shows the priority distribution of open bugs.
Someone may draw the conclusion that quality is pretty good here. Only 1% of the bugs are of the highest priority, and the distribution looks normal. However, the underlying data includes only the bugs that are still open. This team may or may not deliver software with many high-priority bugs; if they do, they fix those bugs quickly, so the open-bug snapshot never shows them. We should look at the distribution for all of the bugs, not just the open ones.
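To see how the snapshot misleads, here is a minimal sketch with made-up bug counts (the priorities and numbers are hypothetical, not from any real project). High-priority bugs get fixed fast, so they barely appear among open bugs even though they are common overall:

```python
from collections import Counter

# Hypothetical data: most P1/P2 bugs were fixed quickly, so they
# rarely show up in an "open bugs" snapshot.
closed_bugs = Counter({"P1": 4, "P2": 2})
open_bugs = Counter({"P1": 1, "P2": 1, "P3": 6, "P4": 5})

def priority_share(counts):
    """Each priority's share of the counted bugs, in percent."""
    total = sum(counts.values())
    return {p: round(100 * n / total, 1) for p, n in sorted(counts.items())}

# Open bugs only: P1 looks like a small sliver of the problem...
print("open only:", priority_share(open_bugs))
# ...but across all bugs ever filed, P1 is far more prominent.
print("all bugs: ", priority_share(open_bugs + closed_bugs))
```

The same `priority_share` function runs on both populations; only the input changes, and the P1 share jumps from under 8% to over 26%. That gap is the survivor bias.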
Another example where survivor bias shows up is customer satisfaction surveys. Getting a sense of quality from your customers is vital, but you have to remember that the survey results you see come only from the people who completed your survey. The survivors. You don’t see results from people who gave up on the survey. This is why I like to use very short surveys, like the Net Promoter Score. The shorter the survey, generally, the more survivors you have.
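The Net Promoter Score itself is easy to compute from that one question ("How likely are you to recommend us?", answered 0-10): the percentage of promoters (9-10) minus the percentage of detractors (0-6). A quick sketch, with a hypothetical batch of responses:

```python
def net_promoter_score(scores):
    """Standard NPS: % promoters (9-10) minus % detractors (0-6).
    Passives (7-8) count toward the total but neither group."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical survey responses
responses = [10, 9, 9, 8, 8, 7, 6, 5, 3, 10]
print(net_promoter_score(responses))  # 4 promoters, 3 detractors -> 10
```

Because the whole survey is a single question, far fewer respondents abandon it partway, which is exactly the survivor-bias argument for keeping surveys short.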