Monthly Archives: April 2013

Your Customers Are Talking About You

Your customers are talking about you. Are you listening?

Your customers are talking about your product on social media. They are telling their friends and followers about their experiences with your product. They are praising or complaining about your product or service, and even highlighting bugs. Listening to these conversations can help you make your product better.

A friend told me a story. He was in a meeting when TweetDeck chirped with an update: a customer was complaining that the site was down. He immediately checked his email, but there weren't any alerts, so he sent an email to the operations leader. The email server was down. It turned out the whole data center was offline, including the alerting system and the email server. A tweet from a customer was the first he heard of the outage.

Now, this is an extreme example, but if your customers are having trouble with your product, it's likely some of them are complaining online. With Customer-Driven Quality, we monitor these channels to find opportunities to improve our product and tests.
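The monitoring itself can start very simply. Here is a minimal sketch of keyword-based complaint filtering; the posts and keyword list are made-up examples, and in practice the posts would come from a social-media search API rather than a hard-coded list.

```python
# Hypothetical keyword filter for Customer-Driven Quality monitoring.
# In a real setup, `posts` would be fetched from a social-media API.

COMPLAINT_KEYWORDS = {"down", "broken", "error", "crash", "slow", "bug"}

def flag_complaints(posts):
    """Return the posts that mention any complaint keyword."""
    flagged = []
    for post in posts:
        words = set(post.lower().split())
        if words & COMPLAINT_KEYWORDS:
            flagged.append(post)
    return flagged

posts = [
    "Loving the new release!",
    "Is the site down for anyone else?",
    "Found a bug in the export feature",
]
print(flag_complaints(posts))  # flags the second and third posts
```

A real pipeline would add sentiment scoring and de-duplication, but even a crude filter like this surfaces outage reports faster than waiting for a support ticket.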

Continue reading

Model-Based Testing – An Example

Model-Based Testing techniques allow the automatic creation of test cases, producing huge volumes of tests practically for free. However, many of these tests are shallow and provide only a cursory check of your system.

I learned about Model-Based Testing from Harry Robinson, having heard his talk twice: once when he worked at Google and once when he worked at Microsoft. One presentation demonstrated the use of Model-Based Testing for Google Maps routing; the other, for Bing Maps. Go figure.

In those examples, MBT was shown to be an effective method of performing a gross check of map routing, a very difficult application to test, as Apple learned the hard way.

The basic strategy is to retrieve routes between pairs of locations (pair-wise between US ZIP code centroid locations). The routes are obtained automatically; checking them for correctness is the difficult problem. An oracle is an algorithm or method for determining the success criteria of a test, and Harry showed several oracles for checking the routes for reasonableness.
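One reasonableness oracle in this style, sketched below under my own assumptions rather than as Harry's exact method: a driving route can never be shorter than the great-circle distance between its endpoints, and should not be wildly longer. The coordinates, route length, and slack factor are made-up examples.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def route_is_reasonable(route_km, lat1, lon1, lat2, lon2, slack=5.0):
    """Oracle: routed distance must be at least the straight-line distance
    and no more than `slack` times it (slack factor is an assumption)."""
    straight = haversine_km(lat1, lon1, lat2, lon2)
    return straight <= route_km <= slack * straight

# Seattle to Portland: straight-line distance is roughly 233 km,
# so a 280 km driving route passes this oracle.
print(route_is_reasonable(280, 47.61, -122.33, 45.52, -122.68))  # True
```

The oracle never proves a route correct; it only catches gross failures (a route shorter than physically possible, or absurdly circuitous), which is exactly the kind of cheap, shallow check that scales to millions of generated test cases.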


Continue reading

GTAC 2013

Google hosted GTAC 2013 this past week, with several very interesting presentations. The videos are currently available only as one long stream. Here are the GTAC videos, along with the time stamp at which each talk begins.

Day 1

  • Time: 00:16:30, Keynote, Ari Shamash, Evolution from QA to Test Engineering
  • Time: 1:06:00, James Waldrop, Testing Systems at Scale at Twitter
  • Time: 2:20:00, David Burns & Malini Das, How Do You Test a Mobile OS?
  • Time: 4:04:15, Igor Dorovskikh & Kaustubh Gawande, Mobile Automation in Continuous Delivery Pipeline
  • Time: 4:47:00, David Rothlisberger, Automated Set-Top Box Testing with GStreamer and OpenCV
  • Time: 5:03:00, Ken Kania, Webdriver for Chrome
  • Time: 5:17:45, Vojta Jina, Karma – Test Runner for JavaScript
  • Time: 5:32:50, Patrik Hoglund, Automated Video Quality Measurements
  • Time: 5:47:10, Minal Mishra, When Bad Things Happen to Good Applications
  • Time: 6:34:00, Tao Xie, Testing for Educational Gaming and Educational Gaming for Testing
  • Time: 7:17:20, Simon Stewart, How Facebook Tests Facebook on Android

Continue reading

Root Cause Analysis for Software Problems

It happens. Despite the best efforts of your developers and test team, bugs sometimes escape to customers. As a quality or test leader, it's important to handle these situations in a way that lets the team learn and improve. This template for root cause analysis has worked very well to help the team learn from the escape and make improvements.

To guide the investigation, I've developed the following set of questions, which support a comprehensive root cause analysis. They steer the review in a more productive direction than finger pointing.

Describe Problem:

Describe the symptoms and consequences of the problem. This description should be a summary with enough detail for readers to become familiar with the issue. Include a reference to the trouble ticket or other documentation your organization uses to track such issues. Usually the Quality team or leader prepares the description.

Where was the problem introduced?

Describe the phase in which the problem was introduced, e.g. requirements, design, code, build, etc.

Describe the root cause of the problem:

For the phase where the problem was introduced, describe what actually happened. For example, the requirements could be missing, incorrect, or unclear. Designs might not have provided for error handling or considered the performance requirements. Coding errors may include a missing table entry, incorrect logic ("and" instead of "or"), and so on.

The root cause is often difficult to determine. One tool to use is the "5 Whys" approach: ask the question "why" five times, or until you reach the root cause. For example: the site went down; why? A disk filled up; why? Logs were never rotated; why? And so on, until the underlying process gap surfaces.

In many cases, there will be multiple causes that, combined, contributed to the escape to production. In these more complex scenarios, the "fishbone diagram" is a useful tool.

Determining the root cause is the hardest part of this process. Often, the people involved are defensive or may not be oriented towards digging for root cause (especially if the problem is already fixed).

Continue reading