
Improving Test Practices After Deployment

The last stage of the Customer-Driven Quality life-cycle is the support phase. Each of the practices described in this series of posts was developed for successful software products that go through multiple upgrades and iterative development. As such, the support phase is not the final stage of development, but the first stage of the next iteration. The support phase provides opportunities to learn about customers, learn how to improve your testing practices, and prepare for future development iterations.

Customer Support phase of Customer-Driven Quality life-cycle

Development Team Support

When we develop and release new features of significant complexity, a very useful practice is to put the development and test teams on the front line of customer care. This supplements the customer care team, since the call load is likely to be higher for a new feature that is not yet well understood. Putting developers and testers in the support role also gives them direct interaction with customers, which helps them tweak the design.

Analytics & Feedback

In addition to the feedback channels mentioned above, the support team is an excellent source of knowledge. Because interacting with customers is their main task, they can give customer-focused insights to the development and test team.

For example, records of support calls and tickets should be tagged with metadata that describes the area of the product and the type of call. These tags allow us to perform a Pareto analysis on the calls/tickets and study in more depth the root causes behind the most frequent (or longest-duration) calls.
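As a rough sketch of how this kind of Pareto analysis might be scripted (the file name and column name below are hypothetical, standing in for whatever your ticketing system exports):

```python
# Rough sketch: Pareto analysis of tagged support tickets.
# Assumes a CSV export with a hypothetical "product_area" column, one row per ticket.
import csv
from collections import Counter

counts = Counter()
with open("support_tickets.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[row["product_area"]] += 1

total = sum(counts.values())
cumulative = 0
print(f"{'Product area':<25}{'Tickets':>10}{'Cum %':>10}")
for area, n in counts.most_common():
    cumulative += n
    print(f"{area:<25}{n:>10}{100 * cumulative / total:>9.1f}%")
```

The few product areas at the top of the cumulative-percentage column are the ones worth a deeper root-cause look.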

Root Cause Analysis

If customers are experiencing problems with your software, you have an existence proof of gaps in your development and test practices. Each time this occurs, it's a great opportunity to learn how to improve. Root cause analysis for software helps improve the development and testing practices of your team.

Social Media

Interacting with customers through social media, like Twitter and Facebook, was mentioned earlier as a way to build empathy with customers and discover their definition of quality. Likewise, it's a great way to interact with customers in a support capacity. Customers will frequently post their problems to their friends. Replying directly and proactively, and helping solve the problem, usually results in a positive experience for the customer. Most of the times I've used this, the customers have been pleasantly surprised to receive a direct response from someone on the development team.

See more practices for involving customers to improve quality in the Customer-Driven Quality page.

 

Customer-Driven Testing


Customers can and should strongly influence your testing strategy and test plans. This post describes Customer-Driven Testing, the part of the Customer-Driven Quality series focused on the testing phase of the life-cycle.

Software Life-cycle view with the Customer-Driven Testing phase highlighted.

Analytics Driven Test Plans

Your product probably has some type of usage analytics that tracks which features are used. For web applications, the user experience design team is most likely tracking user flow through the application. Your product may have a "user audit log" or some other way to track usage of the program. Your test team should learn how usage is tracked, and use that data to improve your test plan.

One example of a team finding great insights for testing in usage data comes from the telecommunications industry. I was leading the test team for a network management system with about 150 different features. The users were customer support agents, either diagnosing their customers' networks or provisioning new service. The user activity log tracked usage for the purpose of auditing the quality of the customer support agents.

This system was deployed in a secure data center. We asked our customers for a dump of the user activity log for a 30-day period, then aggregated and analyzed the logs to learn the relative usage of each feature. (Microsoft Excel pivot tables are your friend for this type of analysis.)
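The same pivot-table analysis can also be scripted. Here is a minimal sketch in pandas, assuming the activity log exports hypothetical "user" and "feature" columns with one row per action:

```python
# Minimal sketch: relative feature usage from a 30-day user activity log.
# Column names ("feature", "user") are hypothetical placeholders for your log's schema.
import pandas as pd

log = pd.read_csv("user_activity_log.csv")
pivot = log.pivot_table(index="feature", values="user", aggfunc="count")
pivot = pivot.rename(columns={"user": "actions"}).sort_values("actions", ascending=False)
pivot["pct_of_total"] = 100 * pivot["actions"] / pivot["actions"].sum()
pivot["cumulative_pct"] = pivot["pct_of_total"].cumsum()
print(pivot.head(10))
```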

We learned that 92% of all usage of our product happened in just 3 of the features. This was very illuminating: the top 2% of features accounted for 92% of the usage. We made sure that our testing for these 3 features was very thorough, and automated.

The usage data was also very useful to inform our risk-based testing strategy.

Rolling Deployment

When testing is complete, and the product is ready to deploy to customers, you may want to control the speed of that deployment to limit the risk that any bugs escaped your testing.

In the first day after fully deploying our application, we see several orders of magnitude more product usage than in the entirety of our testing cycle. To make sure we have a smooth deployment, we deploy in stages. This helps us ensure we didn’t miss anything in our development process.

We deploy to approximately 10% of our customer base the first evening. Then we monitor feedback from these customers closely, looking for any new or unique issues. The balance of customers receives the new release a couple of days later. This process gives us the chance to correct any bugs that were missed by the test team.
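One way to implement this kind of staged rollout, sketched here rather than describing our actual deployment tooling, is to gate each customer on a stable hash of their ID, so the 10% cohort stays consistent as the percentage is raised:

```python
# Sketch: percentage-based rollout gate using a stable hash of the customer ID.
# The percentage can be raised from 10 to 100 over a few days without re-bucketing anyone.
import hashlib

def in_rollout(customer_id: str, release: str, percent: int) -> bool:
    """Return True if this customer should receive the new release."""
    digest = hashlib.sha256(f"{release}:{customer_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100   # deterministic bucket in 0-99
    return bucket < percent

# First evening: 10% of customers; a couple of days later, raise percent to 100.
print(in_rollout("customer-1234", "release-2.7", percent=10))
```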

We've also been able to use a rolling deployment for software that is distributed to, and deployed by, customers. We can throttle the distribution to gain the same effect. One way to control distribution is to post the software on your server, but control when customers are notified.

Performance Testing in the Real World

The majority of our performance and scalability testing happens in the lab. Our test environment is built to allow repeatable tests, includes diagnostic tools, and has features that help with test team productivity. These characteristics help with testing, but the real world is messier.

Performance and scalability testing should be supplemented and calibrated by using a remote testing service that more closely represents the customer experience. The remote testing service that we use executes test scripts from many locations across the world, which provides information on how our application is running from our customer’s perspective, not just from inside our firewall.

If an external service provider is beyond your budget, you can use free cloud computing services and deploy your test scripts in the cloud.
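As a minimal sketch of what such a cloud-hosted probe could do (the URL and threshold below are placeholders), a script can request a key page on a schedule and record the response time as seen from outside the firewall:

```python
# Minimal sketch of an external performance probe, intended to run from a cloud host
# outside the corporate firewall. The URL and threshold are placeholders.
import time
import urllib.request

URL = "https://example.com/app/login"   # placeholder endpoint
THRESHOLD_SECONDS = 2.0

start = time.monotonic()
with urllib.request.urlopen(URL, timeout=30) as response:
    status = response.status
    body = response.read()
elapsed = time.monotonic() - start

print(f"{URL}: HTTP {status} in {elapsed:.2f}s ({len(body)} bytes)")
if elapsed > THRESHOLD_SECONDS:
    print("WARN: slower than expected from the customer's perspective")
```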

Test with customer data

Many applications need test data, and often the customer's data is not as "clean" as our test data. Customers add and delete records over time. They upgrade from version to version, and migrate from one computer to another. Over time, these activities can add complexity to the database, such as dangling pointers.

If it is possible, legal, and ethical, try to use actual customer data in your tests. You will need to make sure the use is permitted by the End User License Agreement. Going beyond what is legal, you should also make sure it's ethical to use customer data. I find two key principles in using customer data: explicit permission and a fail-safe way of protecting the data.

Explicitly asking permission is a good practice, even if use is allowed by the license agreement. Out of the hundreds of people that I know personally, only one actually reads those agreements, and she is a lawyer. Don’t assume your customer knows the terms of the license agreement.

When using customer data, I'd also recommend putting extra controls around protecting the data and the customer's privacy. These controls should protect the data in a fail-safe manner. Practices like password protection, keeping data behind the firewall, and physical security are good, but it's still possible for the data to leak out. Obfuscating private data provides assured protection, and may not impact your ability to test.

Obfuscating private data can be accomplished with a script or program that substitutes text, replacing actual data with random or scrambled representations. Your program should preserve the structure of the data but change the contents. For example, email addresses, phone numbers, names, and street addresses have structure that should be preserved (for example, account@host.tld).
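A minimal sketch of such a structure-preserving obfuscation, assuming character-by-character scrambling is acceptable for your data:

```python
# Sketch of structure-preserving obfuscation: letters and digits are scrambled,
# but separators such as "@", ".", "(", and "-" are kept, so an email address
# still looks like account@host.tld and a phone number keeps its formatting.
import random
import string

def obfuscate(value: str) -> str:
    out = []
    for ch in value:
        if ch.isalpha():
            out.append(random.choice(string.ascii_lowercase))
        elif ch.isdigit():
            out.append(random.choice(string.digits))
        else:
            out.append(ch)   # preserve structural characters
    return "".join(out)

print(obfuscate("jane.doe@example.com"))   # e.g. "qkte.wrl@zmbvqpk.xyz"
print(obfuscate("(555) 123-4567"))         # e.g. "(274) 905-3318"
```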

Depending on your application, your tool may need to be smart enough to keep data coherent. For example, an application that I worked on did a coherence check on addresses, making sure the city & state matched the zip code (postal code).

When is testing finished?

One question that comes up very often in software testing is: when are we finished testing? The answer is sometimes based on the exit criteria being met, or on the project running out of time. The customer-driven way of determining completion is when the customer says so.

If your team is tracking customer feedback, perhaps with a 1-5 star rating system, you can track this feedback and declare the project "done" only when the customer rating is 4 stars or better. The development and test team stays intact until customers are satisfied with the results.

5-star rating for apps
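A small sketch of what that completion check might look like, assuming ratings arrive as a simple stream of 1-5 values (the sample size and threshold here are assumptions):

```python
# Sketch of a customer-driven "done" check: the release counts as finished only
# once enough recent ratings average 4 stars or better.
from statistics import mean

def is_done(recent_ratings, bar=4.0, minimum_sample=50):
    return len(recent_ratings) >= minimum_sample and mean(recent_ratings) >= bar

ratings = [5, 4, 3, 5, 4] * 12   # placeholder feedback data
print(is_done(ratings))          # True only with enough ratings at or above the bar
```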

Alpha and Beta Test

Alpha and Beta tests are the classic software development practices for including customers in your testing program. The test team should be active participants in these programs and monitor their effectiveness.

See more customer-driven practices or continue to see how to improve your testing after release to customers.

Build in Quality for Customers

Once the product is defined, the development team designs and codes the software. This is called the build phase of Customer-Driven Quality. In this phase, the development team can use practices like A/B testing to test alternate designs.

Build phase of software development life-cycle

Testing Alternate Designs

Making design choices often faces the same issues as the product definition phase: the choice is heavily influenced by the opinion of the most authoritative, or most influential, person in the decision. Instead of relying on one person's opinion, customers can vote with their behavior through A/B testing.

In an A/B test, two (or more) designs are presented to customers, and a tracking metric is used to determine which design better helps the customer succeed with the desired behavior. For example, Google famously tested 41 different shades of blue on web links to find the best color. In that case, the best color was the one that customers clicked most frequently on an advertisement link.
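A minimal sketch of how the winner of such a test might be judged, using a two-proportion z-test on click-through counts (the counts below are made up; in practice they come from your tracking metric):

```python
# Minimal sketch: compare click-through rates of design A and design B
# with a two-proportion z-test. |z| > 1.96 suggests roughly 95% significance.
from math import sqrt

def z_score(clicks_a, views_a, clicks_b, views_b):
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    return (p_b - p_a) / se

print(f"z = {z_score(clicks_a=480, views_a=10_000, clicks_b=552, views_b=10_000):.2f}")
```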

Personalized Development

We have several methods for developers to interact directly with customers during the implementation phase. These boil down to putting a name and a face on actual customers, and engaging with them to build the right product in the right way.

The Adopt a Customer approach has a developer choosing a small group of customers, then building the feature in direct collaboration with them. The developer has access to customers to ask questions, give frequent informal demonstrations, and brainstorm ideas. This process provides focused problem solving, rather than guessing intent from the requirements (or asking HIPPOs).

We maintain a list of customers willing to provide frequent interactions, through the Inner Circle program. Customers opt in to this program, and are available for consultations.

Default Behavior

Behavioral economics, and web analytics, have shown that when customers are presented with a choice, they quite often choose the default selection. This behavior should be considered when making design decisions. For example, a user interface where the customer must choose from multiple options (State, for example) should not pre-select a default, especially if the list is alphabetical. I've seen customer data where Alaska has an unusually high number of customers. I believe this is because Alaska is the default selection for customers who just click through that screen.

Reviewing the default options is a specific line item in the design review checklist.

Build with Customer’s Platform

Many self-inflicted errors come from cross-platform development. To the degree possible, this should be avoided: the product should be built on the platform used by customers.

In the current state of web application development, most developers prefer to develop with Chrome because of its superior development and debugging plug-ins. These plug-ins were developed by developers, for developers (talk about Customer-Driven Quality). However, most customers use Internet Explorer as their browser. Ideally, the web UI should be developed with Internet Explorer, or at least tested in it by the developers before they check in.

Continue to Customer-Driven Testing or return to the Customer-Driven Quality page for more practices.

Product Definition for Software Quality

In the Define stage of Customer-Driven Quality, your team identifies what should be built, which seems to be a natural fit for customer-driven practices. Marketing, product management, or product owners usually lead this phase, and each of these roles is dedicated to understanding customer wants and needs. So where does the test/quality team play a role?

Product Definition Phase of Software Development

The test/quality team can help the product definition team in two key areas:

  • Make meaningful and mindful investments in three categories: growth of the business, satisfying current customers, and investing in technology for the long term.
  • Test the requirements with customers, before expending the time and effort to build the product.

Investment Level for Customer Quality

One of the most powerful decisions that can be made during the definition phase is deciding how much of the limited development bandwidth to apply towards three broad categories: product infrastructure or technology, satisfying current customers, and building new value for current or future customers.

These decisions are not easy to make, but each category must be considered. Often, the product definition phase is dominated by building value for new customers, but product infrastructure and current customers' pain points are important as well.

The product infrastructure must be maintained and improved to prevent future pain points, such as performance or availability issues. The quality team can make a difference by measuring the technical debt and communicating with all of the stakeholders, including the product definition team.

It’s important to communicate with business stakeholders in business language. They might not understand or appreciate the lack of test automation coverage, but will appreciate the opportunities to reduce the development cycle-time. Learn to speak the language of your stakeholders.

Solving current customers' pain points is driven mainly by measuring customer feedback. Satisfaction surveys, Net Promoter Scores, and Voice of the Customer programs are the measures, and we should dedicate some effort to satisfying current customers (who are likely to influence future purchase decisions).

A few ways of managing the prioritization of current-customer activities include:

  • Applying a “tax” of development time before considering new features; keeping some development resources in reserve so they can concentrate on satisfying current customers.
  • Managing requirements and features in a common prioritized backlog along with infrastructure work and customer issues. Having a common repository will help the team make these tradeoffs.
  • Periodically, a “customer love” or “net promoter” release can be useful, where the entire focus is just solving pain points.

Testing requirements before building product

Customer-Driven Quality does have a natural predator: the HIPPO. Not the aquatic mammal from Africa, but the "Highly Paid Person's Opinion". The HIPPO is a term we use to remind ourselves that it's the customer's opinion that counts, not the boss's.

Hippopotamus the animal. Hippo = Highly Paid Person's Opinion

Brainstorming is also used to bring out the best ideas from the whole team. Too often, though, the results of brainstorming sessions are not the best ideas, but the ones pushed by the most charismatic participants.

Instead of building the feature set desired by the influential people in the organization, the customers could help decide what we build. One set of practices that features customer learning comes from the Lean Startup arena.

The Lean Startup process can be thought of as a product management process, only tangentially related to "quality." A better way to think about it is testing the requirements before the product is built. This is where the test team can play a strong role.

If you think about the idea for a new product or feature as a hypothesis, you can use the scientific method to validate that hypothesis through testing; in Lean Startup terminology, the testing results in Validated Learning.

Validated Learning is all about getting results from customer behavior, not just what customers say they would do. Too often, when talking with a customer (or potential customer) and asking whether they would value a feature, the answer is "yes". Especially in face-to-face encounters, the customers (who are nice people) agree with the idea. Early surveys show tremendous interest, but eventual sales fail to come even close to those projections. Learning from actual customer behavior is intended to eliminate Heisenberg or Hawthorne effects from the learning.

To illustrate customer-driven testing with an example, suppose the product team believes customers want to import their contact list from their email account.

The HIPPO way: The VP uses Outlook for email, so we build a utility to extract contacts from Outlook and an upload service. We build a training video and a help guide for customers. This launches in 3 months.

The traditional way: The marketing team sends a survey out asking customers if they want to import their contact list from email, and asks which email service they use.  In one week, the team knows the percentage of customers who say they will import contacts, and a distribution of email services.

The customer-driven way: The team adds a link that says "import contacts from your email" and a landing page with a drop-down list of email providers. In one day, you have information from customers who actually tried to import their contacts. Of course, if no one ever clicked the link, you learned something extremely valuable: customers don't care about importing contacts. In that case, you saved 3 months of development (and all of the test cases and bugs that come with that much development).

The customer-driven way is more powerful because it provides information on actual customer behavior, not stated intent, and it is not influenced by customers trying to be nice. It also has the chance to provide results much more quickly than a survey, and almost certainly more quickly than building the boss's solution.
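As a sketch of how little code the customer-driven version needs (the route, provider parameter, and log destination are hypothetical), the link can simply record the click before showing a "coming soon" page:

```python
# Sketch of a "fake door" test: the link records interest before the import feature exists.
from flask import Flask, render_template_string, request

app = Flask(__name__)

@app.route("/import-contacts")
def import_contacts():
    # Record the click (and chosen provider, if any) as the validated-learning signal.
    provider = request.args.get("provider", "unknown")
    app.logger.info("import-contacts clicked, provider=%s", provider)
    return render_template_string(
        "<h1>Importing contacts is coming soon.</h1>"
        "<p>Thanks for your interest! We'll let you know when it's ready.</p>"
    )

if __name__ == "__main__":
    app.run()
```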

In the preceding example, the validated learning came from customers clicking the "import contacts" link, not knowing it was a test. The relative power of validated learning varies with the level of investment or commitment made by the customer. In our example, the customer invested some time to attempt the import and to select their provider from a drop-down; this is a relatively low level of investment. Other examples, in descending order of commitment, are:

  1. Pre-purchasing a product or upgrade.
  2. Intending to purchase/upgrade by entering a credit card. Of course, if you don’t have a product to sell yet, don’t actually charge the card. (also called a dry test)
  3. Providing personal information, like email, phone, or login credentials, and signing up to be contacted later. This can come in a message like, “Thanks for your interest, please fill out the form to learn when this is available”
  4. Clicking the link, and filling out a survey.
  5. Clicking a link, then clicking a second “learn more” page.

Using the "dry test" method can get tedious if over-used. Use caution, and only use it when you are experimenting with a very large investment.

User Stories not Requirements

In the traditional model of software development, a special team of people, called business analysts or product managers, talk to customers, understand their needs, and document those needs in the formalized language of requirements.

Great care is taken to write the requirements in a manner that facilitates traceability and completeness, and abstracts away any hint of implementation. When reading these formal requirements, it's frequently easy to miss the point completely.

Documenting the product requirements in the form of User Stories helps keep the focus on customers and removes an opportunity for ambiguity. User Stories can also be read and understood by customers.

In the product definition stage, the quality/test team is usually a recipient of the definition, not a participant in the process. These practices show several ways in which the quality/test team can make a difference in defining the right product. The next post in this series will show several practices for including customers in the design and construction phase.

Visit the main page for more customer-driven quality practices or continue on to learn about building quality into your product.