Category Archives: Testing

That would never happen in real life

Earthrise by William Anders, Apollo 8 Astronaut.


Houston, we have a problem. This line was made famous by Tom Hanks portraying Jim Lovell in Apollo 13. But this was not the first time Jim Lovell reported an in-flight problem to Houston.

Before Apollo 13, Jim Lovell was the Command Module Pilot for Apollo 8, the mission that first flew around the Moon in December 1968. On the way back from the Moon, the guidance computer suddenly reset its position to the launch pad. His call to Houston was less dramatic: “For some reason, we suddenly got a Program 01 and no attitude light on our computer.”

He meant to initiate a star alignment with star 01, but accidentally entered the command P01, which reset the computer’s position back to the mission origin. P01 was never intended to be used in flight. No harm came from the error; the transcripts show that the procedure to correct it came 10 minutes later in the mission. Good thing they were on day 5 of the 6-day mission.

Margaret Hamilton with Apollo 11 source code


What is interesting to me is that this issue was not a new one. Margaret Hamilton had actually found it during a testing session on a Saturday when she brought her daughter, Lauren, in to work. Lauren was playing with the computer simulator and entered that same command. Margaret raised the issue, but the change board declined to authorize a fix because the astronauts were so well trained that they would never make that mistake.

How many times has this happened in bug reviews?

PS: Margaret fixed the issue before Apollo 11.

Want to try it yourself? You can geek out with a simulator of the guidance computer.

Book Review, MetaAutomation by Matt Griscom


MetaAutomation by Matt Griscom

Writing automated tests is the easy part. We’ve all seen demonstrations where the sales engineer creates a test on the fly, then re-runs that test. Tools make it easy to see early progress.

Then comes the day-in, day-out use of automated tests. Tests pass, but the product has bugs in those areas. Tests fail for no apparent reason, then pass the next time. Your “automation person” runs the tests and understands the results, but if they are gone that day, well, no automation. The next product version comes out, and your tests are all broken now…

I’ve worked in test automation for many years, on many projects. Each of these projects was at a different stage of maturity for effective automation. Some efforts were no more than having a tool in place, with one or two people who could create and run the tests. We waited for the tests to complete so we could add those results in with the manual results.

Other projects had a lot more infrastructure to make the automated tests valuable to the entire development and test organization.  Over the years, I’ve added elements of this infrastructure to a set of patterns that I would apply to new automation projects – and those patterns did add value to the new projects.  These patterns include standards for version control of the tests, triggers for execution, management of test data & configurations, test execution farms, notifications, results dashboard, and triage helpers.

Along comes MetaAutomation, by Matt Griscom. The book captures each of those patterns, plus a few that were new to me, in a framework for developing an effective automation program on any software project.

These patterns range across the phases of the development life-cycle for tests. The patterns start with Prioritized Requirements and how these relate to test case creation.

MetaAutomation provides several useful patterns for test cases. Hierarchical Steps, combined with Atomic Checks, allow reuse of test assets across test cases and help isolate failures when they do happen. Several other patterns relate to checking the test results.
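As I read it, the combination works roughly like this: shared steps are reused across test cases, and each step reports its own name on failure so the atomic check pinpoints what broke. Here is a minimal Python sketch of the idea; the names and the decorator are mine, not the book’s implementation:

```python
def step(name):
    """Label a step so a failure reports which step broke."""
    def wrap(fn):
        def run(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except AssertionError as e:
                raise AssertionError(f"step '{name}' failed: {e}") from e
        return run
    return wrap

@step("add item")
def add_item(cart, item):
    cart.append(item)
    assert item in cart
    return cart

@step("check total")
def check_total(cart, expected):
    assert len(cart) == expected, f"expected {expected}, got {len(cart)}"

def test_cart_holds_two_items():
    cart = []
    add_item(cart, "apple")   # reused step
    add_item(cart, "pear")    # reused step
    check_total(cart, 2)      # the atomic check

test_cart_holds_two_items()
print("test passed")
```

When a step fails, its name appears in the error, so triage starts at the broken step instead of the whole test case.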

The Parallel Run pattern describes how to run many test suites in parallel, perhaps in the cloud, helping maximize throughput while reducing execution time.

Smart Retry and Automated Triage help address the dreaded task of triaging failed tests. Care should still be taken when using these patterns; we don’t want to mask flaky tests by making it easier to deal with false results.
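One way to retry without masking flakiness is to record every retry that rescues a test, so the flaky ones stay visible. This is my own sketch of that idea, not the book’s code:

```python
import functools

def smart_retry(times=2):
    """Re-run a failing check up to `times` attempts, but record every
    rescue so flakiness stays visible instead of being silently absorbed."""
    retry_log = []  # (test name, attempt that finally passed)
    def deco(fn):
        @functools.wraps(fn)
        def run(*args, **kwargs):
            for attempt in range(1, times + 1):
                try:
                    result = fn(*args, **kwargs)
                    if attempt > 1:
                        retry_log.append((fn.__name__, attempt))
                    return result
                except AssertionError:
                    if attempt == times:
                        raise
        return run
    deco.log = retry_log
    return deco

calls = {"n": 0}
retry = smart_retry(times=3)

@retry
def flaky_check():
    calls["n"] += 1
    assert calls["n"] >= 2, "transient failure"

flaky_check()
print(retry.log)  # the rescue is recorded, not hidden
```

A nightly report over `retry.log` turns “tests that quietly needed retries” into a flakiness backlog rather than a blind spot.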

The automated tests generate tons of data. The Queryable Quality pattern shows how the team can create value from that data.

If you are in test automation, either building individual tests or leading the effort, this book contains lots of lessons learned the hard way.

I’ll wrap up this review with my favorite sentence from this book: “With MetaAutomation, QA helps developers work faster”. The goal of automation is to improve the overall software development process.

(review originally published at Software Leadership Academy)

Testing Practice – Find my Bugs

Some of my favorite experiences in learning about testing have been exercises with known bugs. Somehow, knowing that the bugs exist and that my challenge is to find them is energizing. That is probably a good mindset for approaching testing in the first place: there are always bugs…

I created a couple of modules, called bugPractice, with intentional bugs for you to practice your skills on. There are likely more bugs than I put in on purpose; maybe you will find those as well.

The first module is a classic testing interview question: test a palindrome checker. The function, is_palindrome, takes a single string and returns True if that string is a palindrome; otherwise, it returns False. I put in 5 bugs. If you find all 5, congratulations. If you find more, well, shame on me. Here is the happy-path execution for is_palindrome:

Transcript of happy path execution of is_palindrome()

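I won’t spoil the bugs here, but the intended behavior described above can be sketched as a straightforward reference implementation (my own, not the buggy module):

```python
def is_palindrome(s):
    """Return True if s reads the same forwards and backwards."""
    return s == s[::-1]

print(is_palindrome("racecar"))  # True
print(is_palindrome("python"))   # False
```

Classic inputs to probe against the buggy version: the empty string, a single character, mixed case, and strings with spaces or punctuation.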

The second module is an implementation of a very simple stack data structure. A stack simulates a stack in the real world: you can add an item to the top (a push) or take the top item off (a pop). Here is the transcript for the stack happy path:

Transcript of happy path execution for a stack class.

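For reference, a minimal correct stack looks something like this (again my own sketch, not the bugPractice module):

```python
class Stack:
    """Minimal last-in, first-out stack."""

    def __init__(self):
        self._items = []

    def push(self, item):
        """Add an item to the top of the stack."""
        self._items.append(item)

    def pop(self):
        """Remove and return the top item; error on an empty stack."""
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2 -- the last item pushed comes off first
```

Popping an empty stack is exactly the kind of edge case worth throwing at the buggy version.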

These use Python, my favorite language. If you need to brush up on your Python, I recommend a couple of resources. First, a software testing class from Udacity uses Python to teach testing fundamentals. Second, Codecademy’s class on Python teaches the fundamentals of the language.

The simplest way to get started: download the archive, unzip it, start the Python interactive interpreter from that folder, and follow the transcripts above.

Testers Adding Value to Unit Tests

Development practices like Test-Driven Development can lead to high levels of Unit Test coverage. Sometimes, the team reaches 100% statement coverage.  How can testers add value when the code is “fully covered”?

Example of a project with 100% Unit Test Coverage


I provide a few examples in this Sticky Minds article:

Review unit tests for missing cases. Just because the code is fully executed doesn’t mean it was exercised with all of the inputs that might cause it to fail.

Review unit tests against the product requirements. The unit tests check that the code works as the developer intended; sometimes you can also verify that the functionality meets the requirements.

Review the unit tests themselves. Having 100% coverage means the code was executed during a test, not that the tests are good, or even test anything. Check that the assertions are valid and useful.

Review exception handling. Unit testing is a good way to test exception handling because the developer has full control of the mocked environment and can simulate the exceptions. These exceptions are difficult to inject in other types of testing, so making sure the unit tests do a good job here is important.
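To make the missing-cases point concrete, here is a hypothetical function with a single unit test that achieves 100% statement coverage yet never touches the input that breaks it:

```python
def average(values):
    """Average of a list of numbers."""
    return sum(values) / len(values)

def test_average():
    # Executes every statement in average() -- 100% coverage --
    # but never tries the empty list, which raises ZeroDivisionError.
    assert average([2, 4, 6]) == 4

test_average()
print("fully covered, but average([]) still crashes")
```

The coverage report says “done”; a tester reviewing the cases says “what about an empty list?”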

Other ideas? Great, please check out the article and leave a comment either here or there.