Customer-Driven Quality Recap
Customer-Driven Quality is a framework for building software that is loved by your customers. It starts with the premise that customers define quality, so every phase of the software development life-cycle should include engagement from actual customers. This post is one in a series; it describes some of the tools and infrastructure that most likely already exist in software development organizations and that are necessary to implement a Customer-Driven Quality program.
Customer Care Organization
I created this framework in a consumer-facing software company. The Customer Care organization is a huge investment intended for helping customers, but it should also be used as a valuable source of feedback to the software development team. This function may be called customer support or tech support in your company. Alternatively, you may not have a dedicated organization, but rather online support forums. Some business-to-business organizations don't have a support organization at all; instead this function is handled by "sales engineers."
Whatever the form of customer support, it's extremely important to build relationships with the care organization and engage deeply with them. There is a wealth of information about quality (well, usually quality misses) in these organizations. Invest time and effort to learn how your company supports customers, and build a network of contacts within the support team.
Besides the customer care channel, it's very useful to get direct feedback from customers in the context of using our products. We've built a feedback widget that can be placed on any of our pages.
The feedback widget allows our customers to give the feature a rating from 1 to 5 stars, type in some comments, and optionally leave contact information if they are willing to have a conversation. The product managers, developers, and testers who created the feature monitor the feedback. This has been an extremely useful mechanism to learn what is working and to find opportunities to improve the feature. As one example of how valuable the feedback mechanism can be, the feature development team is not "done" with a feature until it has achieved at least a 4-star rating.
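To make that 4-star "done" criterion concrete, here is a minimal sketch of aggregating widget ratings. The feedback records and the threshold value are hypothetical; a real pipeline would pull these from the widget's data store.

```python
from statistics import mean

# Hypothetical sample of feedback records: (stars, comment).
feedback = [
    (5, "Love the new dashboard"),
    (3, "Export is slow"),
    (4, "Works well on mobile"),
    (2, "Could not find the settings"),
    (5, "Great feature"),
]

def feature_is_done(records, bar=4.0):
    """Return the average star rating and whether it meets the 'done' bar."""
    avg = mean(stars for stars, _ in records)
    return avg, avg >= bar

avg, done = feature_is_done(feedback)
print(f"average rating: {avg:.2f}, done: {done}")  # 3.80, done: False
```

The team would keep iterating (and reading the comments) until the average crosses the bar.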
Here is some example feedback, where customers provide valuable insights into how we can make the feature even more useful.
Services such as uservoice.com provide this type of functionality for a relatively low investment. This particular example has the added benefit that customers can vote for the most popular feedback items. Customers define what they want, and also provide a priority for the team.
Much of the feedback that we receive is in text form. The feedback widgets, support emails, support chat transcripts, and call summaries all come in plain text. While it is useful to read the raw input to build empathy, it's also important to aggregate the raw data to gain insights into the most important issues faced by your customers.
At my current company, we are lucky to have many customers, so we get a lot of feedback. Automating the analysis of that feedback is efficient and makes it possible to analyze it more frequently.
A tool that we've found to be very useful is Sentiment Analysis. Sentiment analysis is a semi-automated process that measures both the frequency of occurrence of each issue type and the emotional intensity behind it.
Text frequency analysis allows us to count the number of times our customers report a particular type of problem (i.e., how many calls are about login). Where sentiment analysis shines is in adding the customer's intensity. For example, "I really hate your login" is generally worse than "Your login is difficult."
Tools are available to help with this analysis, for example Clarabridge Analyze. Homebrewing your own solution is sometimes the right answer as well. I’m a fan of Python and NLTK for these tasks. One of my previous posts described several tools for analyzing customer sentiment on social networks.
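As a toy illustration of the frequency-plus-intensity idea above, here is a pure-Python sketch. The topic keywords and intensity weights are hypothetical; a real solution would use a proper lexicon and tokenizer (for instance, NLTK's sentiment tooling) rather than these hand-picked values.

```python
from collections import Counter

# Hypothetical negative-intensity lexicon (a real program would use
# something like NLTK's VADER lexicon instead).
INTENSITY = {"hate": 3.0, "terrible": 3.0, "difficult": 1.0, "confusing": 1.5, "slow": 1.0}

# Hypothetical keyword-to-topic mapping for bucketing feedback.
TOPICS = {"login": "login", "password": "login", "export": "export"}

def score_feedback(messages):
    """Count how often each topic appears and sum negative intensity per topic."""
    counts, intensity = Counter(), Counter()
    for msg in messages:
        words = msg.lower().split()
        topics = {TOPICS[w] for w in words if w in TOPICS}
        weight = sum(INTENSITY.get(w, 0.0) for w in words)
        for t in topics:
            counts[t] += 1
            intensity[t] += weight
    return counts, intensity

msgs = [
    "I really hate your login",
    "Your login is difficult",
    "Export is slow",
]
counts, intensity = score_feedback(msgs)
print(counts.most_common())     # [('login', 2), ('export', 1)]
print(intensity.most_common())  # [('login', 4.0), ('export', 1.0)]
```

Note how the two login complaints tie on frequency alone, but "hate" pushes login's intensity score well above export's, which is exactly the prioritization signal plain counting misses.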
Our software has many monitoring mechanisms built in to make sure it's operating properly and to alert the support team in the event things start to go wrong. There are action logs, exception logs, and performance monitors. Each of these logs can also provide some insight into actual customer behavior and experience.
The quality teams should learn what logs are available, how to get access, and how to analyze them. As one example, we pulled the action logs across 30 days of use and identified the top 10 transactions. This information helped us to prioritize the test automation program.
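A top-transactions report like the one above can be produced with very little code. The log-line format here is hypothetical (timestamp, user, transaction name); adapt the parsing to whatever your action log actually emits.

```python
from collections import Counter

# Hypothetical action-log lines: "timestamp user transaction".
log_lines = [
    "2023-01-05T10:00:01 u1 login",
    "2023-01-05T10:00:09 u2 login",
    "2023-01-05T10:01:30 u1 search",
    "2023-01-05T10:02:11 u3 export",
    "2023-01-05T10:02:45 u2 search",
    "2023-01-05T10:03:02 u4 login",
]

def top_transactions(lines, n=10):
    """Count the transaction field of each log line and return the n most common."""
    counts = Counter(line.split()[-1] for line in lines)
    return counts.most_common(n)

print(top_transactions(log_lines))
# [('login', 3), ('search', 2), ('export', 1)]
```

Ranking real 30-day logs this way gives the test team an evidence-based order in which to automate scenarios.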
Successful web apps are likely monitored by tools like Google Analytics or Site Catalyst. Sometimes this information is managed by the marketing team. Build a relationship with the marketing team and use this data to understand how your customers are using your product.
Customer Satisfaction Surveys
Your marketing or customer support team likely surveys your customers to understand their satisfaction level. A very popular method is the Net Promoter survey. Your development and quality teams should read these survey results and the verbatim comments from your customers.
If your organization does not survey customers, building the capability to do so is very useful. I've had success with SurveyMonkey, though there are many alternative tools available. Even if your organization regularly surveys your customers, it may still be useful to have this ability to ask targeted questions.
This post listed a variety of tools that are used for implementing Customer-Driven Quality. The next post will describe ways to engage directly with customers (cutting out the middle-persons). Please leave a comment if you have any questions or if you are gaining value from these posts. Return to the main page for Customer-Driven Software Quality.