Category Archives: Software Leadership

Book Review, MetaAutomation by Matt Griscom

Book cover image of MetaAutomation by Matt Griscom

Writing automated tests is the easy part. We’ve all seen demonstrations where the sales engineer creates a test on the fly, then re-runs that test. Tools make it easy to see early progress.

Then comes the day-in, day-out use of automated tests. Tests pass, yet the product has bugs in those very areas. Tests fail for no apparent reason, then pass the next time. Your “automation person” runs the tests and understands the results, but if they are out that day, there is no automation. Then the next product version comes out and your tests are all broken…

I’ve worked in test automation for many years, on many projects. Each of these projects was at a different stage of maturity for effective automation. Some efforts amounted to no more than having a tool in place, with one or two people who could create and run the tests. We waited for the tests to complete so we could add those results to the manual results.

Other projects had a lot more infrastructure to make the automated tests valuable to the entire development and test organization.  Over the years, I’ve added elements of this infrastructure to a set of patterns that I would apply to new automation projects – and those patterns did add value to the new projects.  These patterns include standards for version control of the tests, triggers for execution, management of test data & configurations, test execution farms, notifications, results dashboard, and triage helpers.

Along comes MetaAutomation by Matt Griscom. The book presents a framework that already contains each of those patterns, plus a few that were new to me, and shows how to develop an effective automation program on any software project.

The patterns span the phases of the development life-cycle for tests, starting with Prioritized Requirements and how they relate to test case creation.

MetaAutomation provides several useful patterns for test cases, and several more for checking the results. Hierarchical Steps, combined with Atomic Checks, allow test assets to be reused across test cases and help isolate failures when they do happen.
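
To make the steps-and-checks idea concrete, here is a minimal Python sketch of a check built from named, reusable steps. This is my own illustration, not code from the book; the AtomicCheck class, the step names, and the api object are invented for the example.

```python
from contextlib import contextmanager

class AtomicCheck:
    """One self-contained check; each step is recorded so a failure is isolated."""

    def __init__(self, name):
        self.name = name
        self.steps = []          # (step name, "pass"/"fail") pairs, in order

    @contextmanager
    def step(self, step_name):
        try:
            yield
            self.steps.append((step_name, "pass"))
        except Exception:
            self.steps.append((step_name, "fail"))
            raise                # stop the check at the first failing step

# Step bodies can be reused across many checks; `api` is a hypothetical product API.
def check_login(api, user):
    check = AtomicCheck("login succeeds")
    with check.step("request auth token"):
        token = api.get_token(user)
    with check.step("token grants access"):
        assert api.whoami(token) == user
    return check
```

When a check fails, the recorded step list tells you exactly which named step broke, rather than leaving you to dig through a monolithic script.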

The Parallel Run pattern describes how to run many test suites in parallel, perhaps in the cloud, maximizing throughput while reducing execution time.

Smart Retry and Automated Triage help address the dreaded task of triaging failed tests. Care should still be taken when using these patterns; we don’t want to mask flaky tests by making it easier to live with false results.

The automated tests generate tons of data. The Queryable Quality pattern shows how the team can create value from that data.
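
As a rough illustration of that idea (my own sketch, not the book’s implementation): once every check run is stored as structured data, quality questions become simple queries. The database file, table, and column names below are assumptions.

```python
import sqlite3

conn = sqlite3.connect("check_results.db")   # hypothetical results store
conn.execute("""CREATE TABLE IF NOT EXISTS check_runs
                (check_name TEXT, outcome TEXT, run_date TEXT)""")

# Which checks have failed most often in the last 30 days?
rows = conn.execute("""
    SELECT check_name, COUNT(*) AS failures
    FROM check_runs
    WHERE outcome = 'fail' AND run_date >= date('now', '-30 days')
    GROUP BY check_name
    ORDER BY failures DESC
    LIMIT 10
""").fetchall()

for name, failures in rows:
    print(f"{name}: {failures} recent failures")
```

With results stored this way, trend and flakiness questions can be answered without re-running anything.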

If you are in test automation, either building individual tests or leading the effort, this book contains lots of lessons learned the hard way.

I’ll wrap up this review with my favorite sentence from this book: “With MetaAutomation, QA helps developers work faster”. The goal of automation is to improve the overall software development process.

(review originally published at Software Leadership Academy)

A conversation with Ananya Bhaduri, Quality Engineering Leader

Ananya Bhaduri is a Quality Engineering leader at SAP Concur. I recently had the opportunity to talk with her about her leadership style. She shares some great tips about how to give meaningful recognition to people who deserve it, and how quality engineers can build their influence (especially in Scrum ceremonies). She also gives some insight into how she approaches interviewing candidates, especially the dreaded coding challenge.

Check out the interview


Voter Fraud Detection Scheme

Our nation is built on principles such as “consent of the governed,” which leads to free and fair elections, and “freedom from unreasonable search,” which leads to personal privacy. The right of Americans to vote is paramount, and we use a secret ballot to choose our leaders.

Recently, a clash between these principles has arisen. Allegations of voter fraud have come up in recent elections (they probably come up in every election, but the issue has been persistent). The federal government recently asked the states for data about voters and their votes. Most states are not going to comply, citing voter privacy.

When there is a clash of principles, we should first see if we can find a solution that satisfies both. If that doesn’t work, we should prioritize the principles, decide which one matters more, and apply the greater principle.

I believe we can protect both principles in this case, free and fair elections and voter privacy, by using technology and a simple process.

First, the voter data is kept at all times by the states. The states do not need to provide the detailed records to the federal government. Instead, each state transforms its data into a tokenized form and provides only the tokenized data to the federal government.

This is how most password systems work: the password itself is not stored in the database; instead, a one-way hash of the password is stored. To check a password, the same hash function is applied to what the user types and the result is compared to the stored value. The same type of process is used here: voting records are hashed into tokens and the tokens are compared.
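
A minimal sketch of the comparison, assuming the states agree on which fields identify a person. This is my illustration, not the published prototype; the field names and the use of SHA-256 are assumptions.

```python
import hashlib

def tokenize(voter):
    # Derive a one-way token from fields that identify the same person in any
    # state; the chosen fields are an assumption for illustration only.
    key = f"{voter['name']}|{voter['dob']}|{voter['ssn_last4']}".lower()
    return hashlib.sha256(key.encode("utf-8")).hexdigest()

# Each state computes tokens from its own records and shares only the tokens.
state_a_voters = [{"name": "Jane Doe", "dob": "1980-01-02", "ssn_last4": "1234"}]
state_b_voters = [{"name": "Jane Doe", "dob": "1980-01-02", "ssn_last4": "1234"}]

state_a_tokens = {tokenize(v) for v in state_a_voters}
state_b_tokens = {tokenize(v) for v in state_b_voters}

# The federal side sees only tokens; any overlap flags records to investigate.
possible_duplicates = state_a_tokens & state_b_tokens
print(f"{len(possible_duplicates)} matching record(s) to investigate")
```

One design note: a plain hash of low-entropy identifiers can be brute-forced, so in practice a keyed hash (for example, an HMAC with a key the states share but the federal government does not hold) would be a safer choice.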

If there is a match, meaning the same person voted in multiple states, then the governments involved (federal and the relevant states) can investigate the matter further.

I created a prototype and published it to GitHub.


Bugs: To Track or Not To Track

Every now and then, I hear a debate about whether we should track bugs or just focus on fixing them.

One point of view is that tracking bugs is a waste of time. The focus should just be on fixing issues as soon as they are found. You don’t build up a large backlog of bugs if you fix them as they appear. This is argued to be a healthy mindset that keeps quality high at all times. Tracking bugs, and keeping a drag of legacy bugs around, is wasted effort: it costs time and money and does nothing by itself to improve the customer experience.

On the flip side, tracking bugs is vital. You need to make sure that bugs don’t fall through the cracks, that bug fixes get proper verification and regression testing, and that you have data about bugs to drive process improvements.

My point of view: anytime there is a debate between doing “this” or “that”, the right answer is usually “both”. There are situations where simplicity and efficiency are most appropriate, and situations where tracking and data collection are appropriate.

Testing and review form a feedback loop: someone creates something, and someone else evaluates that work and provides feedback. Some of these loops are “inner loops”, where the feedback cycle is very quick and very direct. The “outer loops” are longer and involve more people.

Test Driven Development is an example of an inner loop, where the cycle is “write a failing test”, “code until the test passes”, then “refactor”. It would clearly be inefficient to file bugs for those failing tests; the developer is using the failures to drive the code and doesn’t need to track these intentional bugs.
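
As a toy example of that inner loop (the function and the pytest-style test are invented for illustration):

```python
# Step 1: the check is written first, before the code it exercises, so it fails.
def test_total_with_tax():
    assert total_with_tax(100.00, 0.10) == 110.00

# Step 2: write just enough code to make the check pass.
def total_with_tax(price, tax_rate):
    return round(price * (1 + tax_rate), 2)

# Step 3: refactor with the passing check as a safety net.
# None of the intermediate failures need a bug report; they are the loop working.
```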

The customer support process is an obvious example of an outer loop. When customers find bugs and report them to us, we should make sure those bugs are addressed with the proper priority and that we do a root cause analysis to learn from the mistakes.

This stylized diagram illustrates the relationship between the TDD inner loop and the customer support outer loop.

Stylized SDLC showing an inner loop of TDD and an outer loop of Customer Support

Here are some examples of practices used in development and testing, along with how I generally recommend tracking the issues and capturing the learning from mistakes. Of course, your mileage may vary based on your industry, product, and any regulatory requirements.

Inner loops

Practices:
· Personal code review
· Peer review / buddy check
· Unit tests & debugging
· Failing tests in TDD
· Parallel testing with a buddy

Bug tracking: No formal tracking; just fix the bug.
Learning: Learning and improvement is a personal endeavor.

Medium loops

Practices:
· Failures in tests on a feature branch (CI, build verification, etc.)
· Bugs found inside a sprint, on a story being implemented
· Non-real-time code review (using a tool, email, etc.)

Bug tracking: Lightweight tracking, such as a simple list on a wiki or whiteboard, post-it notes, or a lightweight tool with just open/closed state.
Learning: Learning and improvement happen as a team, usually in the sprint retrospective.

Outer loops

Practices:
· Failures in tests on the trunk/main branch (CI, build verification, etc.)
· Bugs found after a sprint (regression testing, hardening tests)
· In general, bugs found outside the immediate dev team for those features
· Customer-reported bugs
· Bugs found during certification tests
· Bugs found by outside testers (crowdsourced, off-shore, etc.)

Bug tracking: A bug tracking system with a workflow and metadata such as priority, severity, and state, plus the normal fields. Capturing RCA information in the tracking system is useful.
Learning: Learning and improvement is part of the continuous improvement program, with root cause analysis for the important bugs (customer-found, etc.).

These guidelines have been formed by my experience and are meant to balance quality, continuous learning, team empowerment, and efficiency.