Monthly Archives: July 2017

Leadership Lessons from the Latest Episode of Game of Thrones

Winter has come, and Game of Thrones fans enjoyed the first episode of season 7. The episode offered a couple of good leadership lessons. By the old gods and the new, there are spoilers ahead, so proceed at your own risk.

Jon Snow leading the North in Winterfell

Jon Snow is an inclusive leader, which expands his influence. He embraced the wildlings earlier in the show and, most recently, the next generation of the Umbers and Karstarks. These moves build his coalition, broaden his influence, and will aid in the battle with the white walkers.

Most organizations have an “enemy” to help focus their strategy. The most successful organizations focus their efforts outside, towards beating a competitor or changing the status quo in the marketplace. I’ve seen places that concentrate, instead, on infighting between departments. These places don’t exist anymore.

Jon Snow focuses instead on the outside threat, the white walkers, which puts the North in the best position to survive.

Secret Information in the Citadel Library

We see Sam Tarly at the Citadel, home to the greatest library in Westeros. All kinds of information vital to humanity is literally locked up there. The maesters are keeping the “memory” of the world alive, but they are not making that useful information available.

Be transparent with your information; don’t hoard it. You never know who needs that data to do their job.

Voter Fraud Detection Scheme

Our nation has been built on principles such as “consent of the governed”, which leads to free and fair elections, and “freedom from unreasonable search”, which leads to personal privacy. Americans’ right to vote is paramount, and we use a secret ballot to choose our leaders.

Recently, a clash between these principles has arisen. Allegations of voter fraud have come up in recent elections; such allegations probably surface in every election, but the issue has been persistent. The federal government recently asked the states for detailed data about voters and their votes. Most states are not complying, citing voter privacy.

When there is a clash of principles, we should first see if we can find a solution that satisfies both. If that doesn’t work, we should prioritize the principles, decide which one matters more, and apply the greater principle.

I believe that in this case we can protect both principles, free and fair elections and voter privacy, by using technology and a simple process.

First, the voter data is kept at all times by the states. The states do not need to provide the detailed records to the federal government. Instead, each state transforms its data into a tokenized form and provides only the tokenized data to the federal government.

This is how most password systems work: the password itself is not stored in the database. Instead, the password is run through a one-way hash, and only that hashed form is stored. To check a password, the same hashing method is applied to the input and the result is compared to the stored hash. The same type of process is used here: voting records are hashed and the hashes are compared.

If there is a match, meaning the same person appears to have voted in multiple states, then the governments involved (federal and the relevant states) can investigate the matter further.
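As a rough illustration of the approach (a minimal sketch, not the published prototype; the field names, the shared key, and the HMAC-SHA256 choice are all assumptions for the example), each state could derive one-way tokens from voter records, and the federal side would only ever see and compare tokens:

```python
import hashlib
import hmac

# Assumption: all states share this key and the same normalization
# rules; otherwise the tokens would not be comparable across states.
SHARED_KEY = b"agreed-upon-secret"

def tokenize(name: str, dob: str, id_number: str) -> str:
    """Derive a one-way token from a voter's identifying fields."""
    normalized = f"{name.strip().lower()}|{dob}|{id_number}".encode()
    return hmac.new(SHARED_KEY, normalized, hashlib.sha256).hexdigest()

# Each state submits only tokens, never the raw records.
state_a_tokens = {tokenize("Jane Doe", "1970-01-01", "12345")}
state_b_tokens = {tokenize("Jane Doe", "1970-01-01", "12345"),
                  tokenize("John Roe", "1980-02-02", "67890")}

# The federal side looks only for the same token appearing in
# multiple states.
duplicates = state_a_tokens & state_b_tokens
print(f"{len(duplicates)} potential duplicate registration(s) found")
```

A keyed hash (HMAC) rather than a plain hash matters here: voter identities are low-entropy, so an unkeyed hash could be reversed by brute-forcing likely names and birthdates, while the key keeps the recipient from recovering identities from the tokens.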

I created a prototype and published it to GitHub.

Testers Adding Value to Unit Tests

Development practices like Test-Driven Development can lead to high levels of unit test coverage; sometimes the team reaches 100% statement coverage. How can testers add value when the code is “fully covered”?

Example of a project with 100% Unit Test Coverage

I provide a few examples in this StickyMinds article:

Review unit tests for missing cases. Just because the code is fully executed doesn’t mean every case is exercised, and an untested case might still cause the code to fail (see the sketch after this list).

Review unit tests against the product requirements. The unit tests check that the code works as the developer intended; reviewing them against the requirements can verify that the functionality actually meets the spec.

Review the unit tests themselves. Having 100% coverage means the code was executed during a test, not that the tests are good, or even that they test anything. Check that the assertions are valid and useful.

Review exception handling. Unit testing is a good way to test exception handling because the developer has full control of the mocked environment and can simulate the exceptions. These exceptions are difficult to inject in other types of testing, so making sure the unit tests cover them well is important.
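To make the first and third points concrete, here is a small hypothetical example in Python (invented for this post, not taken from the article). One test gives the function 100% statement coverage, yet input cases are missing and a second test executes the code without asserting anything useful:

```python
import unittest

def discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    return price * (1 - percent / 100)

class TestDiscount(unittest.TestCase):
    def test_basic_discount(self):
        # This single test already yields 100% statement coverage...
        self.assertAlmostEqual(discount(100.0, 10), 90.0)

    def test_weak_assertion(self):
        # ...and this test also "covers" the code while checking
        # nothing about the result at all.
        discount(100.0, 10)

# Cases a reviewer might flag as missing: percent of 0, percent > 100
# (should the price go negative?), negative percent, price of 0.
# None of them are needed for coverage, yet any could hide a defect.

if __name__ == "__main__":
    unittest.main()
```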

Other ideas? Great, please check out the article and leave a comment either here or there.

Bugs: To Track or Not To Track

Every now and then, I hear a debate about whether we should track bugs or just focus on fixing them.

One point of view is that tracking bugs is a waste of time. The focus should simply be on fixing issues as soon as they are found; if you fix bugs as they appear, you never build up a large backlog. This is argued to be a healthy mindset for keeping quality high at all times. Tracking bugs, and keeping a drag of legacy bugs around, is wasted effort: it costs time and money and does nothing by itself to improve the customer experience.

On the flip side, tracking bugs is vital. You need to make sure that bugs don’t fall through the cracks, that bug fixes get proper verification and regression testing, and that you have data about bugs to drive process improvements.

My point of view: anytime there is a debate between doing “this” or “that”, the right answer is usually “both”. There are situations where simplicity and efficiency are most appropriate, and situations where tracking and data collection are appropriate.

Testing and review form a feedback loop: someone creates something, and someone else evaluates that work and provides feedback. Some of these loops are “inner loops”, where the feedback cycle is very quick and very direct. The “outer loops” are longer and involve more people.

Test-Driven Development illustrates an example of an inner loop, where the cycle is “write a failing test”, “code until the test passes”, then “refactor”. It’s clearly inefficient to file bug reports for those failing tests; the developer is using this method to drive the code, so he or she doesn’t need to track the (intentional) bugs.
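For readers who haven’t seen the cycle in practice, here is a miniature red-green-refactor pass in Python (the function and test are invented for illustration):

```python
import re

# Step 1 (red): write a failing test first. Before slugify exists,
# this test fails -- that failure is intentional and never tracked
# as a bug.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Step 2 (green): write just enough code to make the test pass.
def slugify(title: str) -> str:
    """Lowercase a title and collapse non-alphanumeric runs into hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Step 3 (refactor): clean up the implementation with the test as a
# safety net, then repeat the cycle with the next failing test.

if __name__ == "__main__":
    test_slugify()
    print("test passes")
```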

The customer support process is an obvious example of an outer loop. When customers find bugs and report them to us, we should make sure those bugs are addressed with the proper priority and that we do a root cause analysis to learn from the mistakes.

This stylized diagram illustrates the relationship between the TDD inner loop and the customer support outer loop.

Stylized SDLC showing an inner loop of TDD and an outer loop of Customer Support

Here are some examples of practices used in development and testing, along with how I generally recommend tracking the issues and capturing the learning from the mistakes. Of course, your mileage may vary based on your industry, product, and any regulatory requirements.

Inner Loops

Practices:
· Personal code review
· Peer review/buddy check
· Unit tests & debugging
· Failing tests in TDD
· Parallel testing with a buddy

Bug tracking: No formal tracking, just fix the bug.

Learning: Learning and improvement is a personal endeavor.

Medium Loops

Practices:
· Failures in tests on a feature branch (CI, build verification, etc.)
· Bugs found inside a sprint, on a story being implemented
· Non-real-time code review (using a tool, email, etc.)

Bug tracking: Lightweight tracking, such as a simple list on a wiki or whiteboard, post-it notes, or a tool with just open/closed states.

Learning: Learning and improvement happen as a team, usually in the sprint retrospective.

Outer Loops

Practices:
· Failures in tests on the trunk/main branch (CI, build verification, etc.)
· Bugs found after a sprint (regression testing, hardening tests)
· In general, bugs found outside the immediate dev team for those features
· Customer-reported bugs
· Bugs found during certification tests
· Bugs found by outside testers (crowdsourced, offshore, etc.)

Bug tracking: A bug tracking system with a workflow and metadata like priority, severity, state, and the usual fields. Capturing RCA information in the tracking system is useful.

Learning: Learning and improvement is part of the continuous improvement program, with root cause analysis for the important bugs (customer-found, etc.).

These guidelines have been shaped by my experience, and they are meant to balance quality, continuous learning, team empowerment, and efficiency.