Every now and then, I hear a debate about whether we should track bugs or just focus on fixing them.
One point of view is that tracking bugs is a waste of time. The focus should simply be on fixing issues as quickly as they are found. If you fix bugs as they appear, you never build up a large backlog. This is argued to be a healthy mindset that keeps quality high at all times. Tracking bugs, and keeping a drag of legacy bugs around, is wasted effort because it costs time and money and does nothing by itself to improve the customer experience.
On the flip side, tracking bugs is vital. You need to make sure that bugs don’t fall through the cracks, that bug fixes get proper verification and regression testing, and that you have data about bugs to drive process improvements.
My point of view is that anytime there is a debate between doing “this” or “that”, the right answer is usually “both”. There are situations where simplicity and efficiency are most appropriate, and situations where tracking and data collection are appropriate.
Testing and review are a feedback loop: someone creates something, and someone else evaluates that work and provides feedback. Some of these loops are “inner loops”, where the feedback cycle is very quick and very direct. The “outer loops” are longer and involve more people.
Test Driven Development illustrates an example of an inner loop, where the cycle is “write a failing test”, “code until the test passes”, then “refactor”. It’s clearly inefficient to file bugs for the failing tests; the developer is using this method to develop the code and doesn’t need to track the (intentional) failures.
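To make the inner loop concrete, here is a minimal sketch of one TDD cycle. The `slugify` helper and its behavior are purely illustrative, not from any real project:

```python
import unittest

# Step 1: write a failing test first. At this point slugify() doesn't
# exist yet, so the test fails -- that failure is intentional, and
# nobody files a bug for it.
class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2: write just enough code to make the test pass.
def slugify(text):
    return text.lower().replace(" ", "-")

# Step 3: refactor with the passing test as a safety net, then repeat.
if __name__ == "__main__":
    unittest.main()
```

The failing test is the feedback; the loop closes in minutes, which is why any formal tracking here would only add drag.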
The customer support process is an obvious example of an outer loop. When customers find bugs and report them to us, we should make sure those bugs are addressed with the proper priority and that we do a root cause analysis to learn from the mistakes.
This stylized diagram illustrates the relationship between the TDD inner loop and the customer support outer loop.
*Stylized SDLC showing an inner loop of TDD and an outer loop of Customer Support*
Here are some examples of practices used in development and testing, along with how I generally recommend we track the issues and how we capture the learning from the mistakes. Of course, your mileage may vary based on your industry, product, and any regulatory requirements.
Practices:
· Personal code review
· Peer review/buddy check
· Unit tests & debugging
· Failing tests in TDD
· Parallel testing with a buddy
Tracking: No formal tracking; just fix the bug.
Learning: Learning and improvement is a personal endeavor.

Practices:
· Failures in tests on a feature branch (CI, build verification, etc.)
· Bugs found inside a sprint, on a story being implemented
· Non-real-time code review (using a tool, email, etc.)
Tracking: Lightweight tracking. A simple list on a wiki/whiteboard, post-it notes, or a lightweight tool with just open/closed state.
Learning: Learning and improvement happen as a team, usually in the sprint retrospective.

Practices:
· Failures in tests on the trunk/main branch (CI, build verification, etc.)
· Bugs found after a sprint (regression testing, hardening tests)
· In general, bugs found outside the immediate dev team for those features
· Customer-reported bugs
· Bugs found during certification tests
· Bugs found by outside testers (crowdsourced, off-shore, etc.)
Tracking: A bug tracking system with a workflow and metadata like priority, severity, state, and the other normal fields. Capturing RCA information in the tracking system is useful.
Learning: Learning and improvement is part of the continuous improvement program, including root cause analysis for the important bugs (customer-found, etc.).
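As a minimal sketch of what “workflow and metadata” means for the outer-loop tier, here is an illustrative bug record. The field names and value ranges are assumptions for the example, not from any particular tracking tool:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical workflow states; real trackers usually have more.
class State(Enum):
    OPEN = "open"
    IN_PROGRESS = "in progress"
    VERIFIED = "verified"
    CLOSED = "closed"

@dataclass
class BugRecord:
    title: str
    priority: int        # e.g. 1 (urgent) .. 4 (low) -- illustrative scale
    severity: int        # e.g. 1 (crash) .. 4 (cosmetic) -- illustrative scale
    state: State = State.OPEN
    root_cause: str = "" # RCA notes captured alongside the bug

# A customer-reported bug moving through the workflow:
bug = BugRecord(title="Crash on empty import file", priority=1, severity=1)
bug.state = State.IN_PROGRESS
bug.root_cause = "Missing null check on file parser input"
```

The point is not the specific fields but that the outer loop justifies this structure: state supports verification and regression testing, and the RCA field feeds the continuous improvement program.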
These guidelines were formed by my experience and are meant to balance quality, continuous learning, team empowerment, and efficiency.