Category Archives: Software Leadership

Bill Gates on Automating Tests

OK, he wasn’t talking specifically about automating tests. But he talked about automating the jobs that can be automated and redirecting the human effort toward things “where human empathy and understanding are still very, very unique.”

His topic was taxing the output of robots and using those funds to train the displaced workers towards those new roles.

“So if you can take the labor that used to do the thing automation replaces, and financially and training-wise and fulfillment-wise have that person go off and do these other things, then you’re net ahead.”

Read the full article and see the video here.

Half of the information is better than none at all

My wife and I recently had a conversation which I need to work into my upcoming talk on metrics. I’m traveling, and she is planning to pick me up from the airport. It’s a long flight, and she is wondering about flight updates and what time she should show up at the airport.

I tell her that I’ll text her whether the plane takes off on time or is delayed. She initially says that she needs to know if it will arrive on time, not if it departs on time. Then she immediately realizes that if the plane departs late, it will surely arrive late. So having that bit of information about an on-time departure does have some value.

Sometimes we let perfect get in the way of progress. A metric can be gamed, or it’s not comprehensive, or it doesn’t tell the whole story. However, if you understand the underlying process, having half the information can still be quite useful.

Fallacies with Metrics

I have a talk at the upcoming STPCON in Phoenix, called Metrics: Choose Wisely. I’ll be featuring some of the content here as a preview. Please feel free to comment and ask questions here, and by all means, do attend the conference if you can.

At the talk, I’ll be providing a methodology for creating software quality metrics that tie into your business goals. Then I’ll pick apart some of my own work by showing various fallacies in using these metrics. The first fallacy to watch for is survivor bias.

For a quick exercise, think about a medieval castle.  What material are castles made of?

Edinburgh Castle – illustrating our conception that castles are made of stone.

Yeah, stone castles are what we think of when we think of castles. In fact, most castles were made of timber – out of wood. However, today we mostly see the castles that survived for hundreds of years. We only see the stone castles because the wooden castles burned or rotted away.

For an example of where survivor bias may impact conclusions drawn from a metric, consider this chart, which shows open bugs by priority.

Open bugs by priority

Someone may draw the conclusion that quality is pretty good here. Only 1% of the bugs are of the highest priority, and the distribution looks normal. However, the underlying data covers only the bugs that are still open. This team may, or may not, deliver software with many high-priority bugs; if they fix those bugs quickly, the high-priority bugs never show up as open. We should look at the distribution for all of the bugs, not just the open ones.
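To make the difference concrete, here’s a minimal sketch – with made-up bug records and a hypothetical status field, not data from the chart above – comparing the priority distribution of open bugs against all bugs ever filed:

```python
from collections import Counter

# Made-up bug records as (priority, status) pairs. In a real tracker you
# would query these; the point is to include fixed bugs, not just open ones.
bugs = [
    ("P1", "fixed"), ("P1", "fixed"), ("P1", "open"),
    ("P2", "fixed"), ("P2", "open"), ("P2", "open"),
    ("P3", "fixed"), ("P3", "open"), ("P3", "open"),
    ("P4", "open"),
]

def priority_distribution(records):
    """Fraction of bugs at each priority level."""
    counts = Counter(priority for priority, _ in records)
    total = sum(counts.values())
    return {p: round(counts[p] / total, 2) for p in sorted(counts)}

open_only = [b for b in bugs if b[1] == "open"]
print("open bugs:", priority_distribution(open_only))  # P1 is 17% of open bugs...
print("all bugs: ", priority_distribution(bugs))       # ...but 30% of all bugs
```

In this made-up data, the team fixes P1 bugs quickly, so the open-bug chart badly understates how many high-priority bugs they actually shipped.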

Another example where survivor bias exists is with customer satisfaction surveys. Getting a sense of quality from your customers is vital, but you have to remember that the survey results you see come only from the people who completed your survey. The survivors. You don’t see results from the people who gave up on the survey. This is why I like to use very short surveys, like the Net Promoter Score. The shorter the survey, generally the more survivors you have.
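For reference, the Net Promoter Score comes from a single question – “How likely are you to recommend us?” on a 0–10 scale – where 9–10 are promoters, 0–6 are detractors, and the score is the percentage of promoters minus the percentage of detractors. A quick sketch with made-up responses:

```python
def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Made-up responses on the standard 0-10 scale.
responses = [10, 9, 9, 8, 7, 6, 10, 3, 8, 8]
print(net_promoter_score(responses))  # 4 promoters, 2 detractors -> 20.0
```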

The Testing Show Podcast – Making QA Strategic

I recently found a new podcast about testing: The Testing Show. It appears to come out every two weeks, and is a panel discussion hosted by Matt Heusser and Michael Larsen.

The show usually starts with a news segment, where the panel discusses some major software bug from the previous couple of weeks. Then the panel moves on to a topic for that session. The topic that spurred me to write this was “Making QA Strategic”.

At first, the session sounded like it was going to be a complaint about how testers should be taken more seriously. “We have the knowledge, if they would only listen to us” is too often a refrain in the testing community – delivered as a complaint.

However, that sentiment quickly faded and the bulk of the podcast was about testing professionals giving their advice on how to make the testing team more strategic. Here are a few examples:

Josh Assad: “I try to build partnerships with my customers.” He told a story where he traveled to the customer site and spent a week building relationships and figuring out how to optimize his testing to fit into the customer’s acceptance practices.

Jared Small described the value of his team as staying focused on the customer, and helping the entire team do the same.

Jared also mentioned that a big part of his role is staying on top of industry trends and what is happening in the software testing community.

Matt Heusser described an interesting model: the Swiss cheese model of risk. No one technique will eliminate all risk in any economical fashion, but combined efforts – automated tests, manual tests, and production monitoring – taken together will greatly reduce overall risk. I’m not sure that I can describe how this relates to Swiss cheese, but it sounded good. Maybe I should eat lunch before listening to podcasts.
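For what it’s worth, the cheese metaphor (as I understand it) is that each layer of defense is a slice with holes in it, and a defect only escapes when the holes in every slice line up. A minimal sketch of that arithmetic, assuming the layers fail independently and using purely illustrative catch rates:

```python
# Each layer catches some fraction of defects; a defect "escapes" only
# if it slips through the holes in every layer. Rates are illustrative.

def escape_probability(catch_rates):
    """Probability a defect slips past every layer, assuming the
    layers fail independently of one another."""
    p = 1.0
    for rate in catch_rates:
        p *= (1.0 - rate)
    return p

layers = {
    "automated tests": 0.70,
    "manual tests": 0.50,
    "production monitoring": 0.60,
}

print(escape_probability(layers.values()))  # 0.3 * 0.5 * 0.4 = 0.06
```

Real layers aren’t independent, of course – the same blind spot can run through all of them – but the arithmetic shows why several leaky defenses can still add up to a fairly solid one.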

Matt also described the SWOT (Strengths, Weaknesses, Opportunities, and Threats) model, applied to a gap assessment of the team. Identifying the gaps between your existing team and the ideal is a great way to determine improvement goals.

I also really liked Erik Davis’s approach to innovation. He stresses the importance of trying new things and keeping the ones that work. I believe testing & quality is a field with tons of opportunities for innovation. As I like to say, there is literally an infinite amount of testing that could be done, but a (very) finite amount of time.

Take a listen if you enjoy podcasts. I’ve subscribed and will be browsing their archives. Another cool feature: they post the full transcript on the web.

Well done.