Category Archives: Testing

Book Review: MetaAutomation by Matt Griscom

Book cover image of MetaAutomation by Matt Griscom

Writing automated tests is the easy part. We’ve all seen demonstrations where the sales engineer creates a test on the fly, then re-runs that test. Tools make it easy to see early progress.

Then comes the day-in, day-out use of automated tests. Tests pass, but the product has bugs in those areas. Tests fail for no apparent reason, then pass the next time. Your “automation person” runs the tests and understands the results, but if they are out that day, well, no automation. The next product version comes out, and your tests are all broken…

I’ve worked in test automation for many years, on many projects. Each of these projects was at a different stage of maturity for effective automation. Some efforts were no more than having a tool in place, with one or two people who could create and run the tests. We waited for the tests to be completed so we could fold those results in with the manual results.

Other projects had much more infrastructure to make the automated tests valuable to the entire development and test organization. Over the years, I’ve collected elements of this infrastructure into a set of patterns that I would apply to new automation projects – and those patterns did add value to the new projects. These patterns include standards for version control of the tests, triggers for execution, management of test data and configurations, test execution farms, notifications, results dashboards, and triage helpers.

Along comes MetaAutomation, by Matt Griscom. This book provides a framework which already contains each of those patterns and a few more that were new to me. Matt’s book provides a framework to develop an effective automation program on any software project.

These patterns range across the phases of the development life-cycle for tests. The patterns start with Prioritized Requirements and how these relate to test case creation.

MetaAutomation provides several useful patterns for test cases. Hierarchical Steps, combined with Atomic Checks, allow reuse of test assets across test cases – and help isolate failures when they do happen. Several other patterns relate to checking the test results.
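The combination is easy to sketch in code. What follows is my own toy illustration of the idea, not code from the book: each atomic check is one independent verification built from named, reusable steps, so a failure pinpoints exactly which step broke.

```python
def step(name, action):
    """Run one named sub-step; on failure, report exactly which step broke."""
    try:
        return action()
    except Exception as exc:
        raise AssertionError(f"Failed at step: {name}") from exc

class FakeSession:
    """Stand-in for a real HTTP client, just for this illustration."""
    def get(self, url):
        return "<html><form>...</form></html>"

def check_login_page(session):
    """An atomic check: one self-contained verification composed of reusable steps."""
    page = step("load login page", lambda: session.get("/login"))
    step("verify form present", lambda: page.index("<form>"))

check_login_page(FakeSession())  # passes quietly; a failure would name the broken step
```

The same `step` helper can be reused by every check, which is the asset-reuse half of the pattern.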

The Parallel Run pattern describes how to run many test suites in parallel, perhaps on the cloud, helping to maximize throughput with reduced duration of execution.
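The book treats this at the infrastructure level; as a toy sketch of the idea (the suite names and stand-in runner here are made up), Python’s standard library can already fan suites out across workers:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def run_suite(name):
    """Stand-in for launching one test suite; a real runner would shell out to it."""
    time.sleep(0.1)  # simulate suite execution time
    return name, "passed"

suites = ["smoke", "api", "ui", "perf"]  # hypothetical suite names
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_suite, suites))  # all four suites run concurrently
print(results)
```

With four workers, the four suites finish in roughly the time of the slowest one rather than the sum of all four.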

Smart Retry and Automated Triage help address the dreaded task of triaging failed tests. Care should still be taken when using these patterns: we don’t want to mask flaky tests by making it easier to deal with false results.

The automated tests generate tons of data. The Queryable Quality pattern shows how the team can create value from that data.

If you are in test automation, either building individual tests or leading the effort, this book contains lots of lessons learned the hard way.

I’ll wrap up this review with my favorite sentence from this book: “With MetaAutomation, QA helps developers work faster”. The goal of automation is to improve the overall software development process.

(review originally published at Software Leadership Academy)

Testing Practice – Find my Bugs

Some of my favorite experiences in learning about testing have been exercises with known bugs. Somehow, knowing that the bugs exist and that my challenge is to find them is energizing. That is probably a good mindset for approaching testing in the first place. There are always bugs…

I created a couple of modules, called bugPractice, with intentional bugs for you to practice your skills. There are likely more bugs than I purposely put in; maybe you will find those as well.

The first module is a classic testing interview question: test a palindrome checker. The function is called is_palindrome and it takes a single string. It returns True if that string is a palindrome; otherwise, it returns False. I put in 5 bugs. If you find all 5, congratulations. If you find more, well, shame on me. Here is the happy path execution for is_palindrome:

Transcript of happy path execution of is_palindrome()
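For comparison, here is what a minimal, straightforward palindrome checker could look like. This is my own sketch, not the bugPractice code, so don’t let it spoil the hunt:

```python
def is_palindrome(text):
    """Return True if text reads the same forwards and backwards."""
    return text == text[::-1]

print(is_palindrome("racecar"))  # True
print(is_palindrome("hello"))    # False
```

Part of the exercise is deciding what the right behavior even is for inputs like mixed case, spaces, or the empty string.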

The second module is an implementation of a very simple stack data structure. A stack simulates a real-world stack: you can add an item to the top (a push) or take the top item off (a pop). Here is the transcript for the stack happy path:

Transcript of happy path execution for a stack class.
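Again for reference only – the bugPractice class may differ in naming and details – a minimal stack in Python is just a thin wrapper over a list:

```python
class Stack:
    """A minimal last-in, first-out stack backed by a Python list."""

    def __init__(self):
        self._items = []

    def push(self, item):
        """Add an item on top of the stack."""
        self._items.append(item)

    def pop(self):
        """Remove and return the item on top of the stack."""
        return self._items.pop()

s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2 -- last in, first out
```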

These use Python, my favorite language. If you need to brush up on your Python, I recommend a couple of resources. First, a software testing class from Udacity uses Python to teach testing fundamentals. Second, Codecademy’s class on Python teaches the fundamentals of the language.

The simplest way to get started: download the archive, unzip it, then start the Python interactive interpreter from that folder and follow the transcripts above.

Testers Adding Value to Unit Tests

Development practices like Test-Driven Development can lead to high levels of Unit Test coverage. Sometimes, the team reaches 100% statement coverage.  How can testers add value when the code is “fully covered”?

Example of a project with 100% Unit Test Coverage

I provide a few examples in this Sticky Minds article:

Review unit tests for missing cases. Just because the code is fully executed doesn’t mean it was exercised with all of the cases that might cause it to fail.

Review unit tests against the product requirements. The unit tests check that the code works as the developer intended; reviewing them against the requirements can verify that the functionality also does what was asked for.

Review the unit tests themselves. Having 100% coverage means only that the code was executed during a test, not that the tests are good or even test anything. Check that the assertions are valid and useful.

Review exception handling. Unit testing is a good way to test exception handling because the developer has full control of the mocked environment and can simulate the exceptions. These exceptions are difficult to inject in other types of testing, so making sure the unit tests do a good job here is important.
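To make the last two points concrete, here is a small sketch. The read_version function and the test names are mine, invented for illustration: the first test executes the code and earns coverage while asserting nothing, and the second uses a mock to simulate the exception and checks the fallback behavior.

```python
import unittest
from unittest.mock import patch

def read_version(path):
    """Hypothetical code under test: first line of a file, or 'unknown' on error."""
    try:
        with open(path) as f:
            return f.readline().strip()
    except OSError:
        return "unknown"

class VersionTests(unittest.TestCase):
    def test_executes_but_checks_nothing(self):
        # Counts toward coverage, yet a bug in the return value would never be caught.
        read_version("VERSION")

    @patch("builtins.open", side_effect=OSError)
    def test_returns_unknown_on_error(self, _mock_open):
        # Simulating the exception exercises the handler directly.
        self.assertEqual(read_version("VERSION"), "unknown")
```

Both tests light up the same lines in a coverage report; only the second one would catch a regression.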

Other ideas? Great, please check out the article and leave a comment either here or there.


Testing Pi day, which is more accurate?

In the US, today is Pi day: March 14, or as we write it in numbers, 3.14. Happy Pi day, everyone!

However, in Europe, dates tend to be written starting with the day, then the month. So today is 14.3 over there. Nowhere near Pi. Instead, the closest to Pi day in Europe would be 22/7 (July 22nd), since 22/7 is a common approximation of Pi.

Which is more accurate?

Testing both approximations is pretty easy with Wolfram Alpha. The error in an approximation is the absolute value of the difference between Pi and the approximation. The following screen shows the result of asking whether the error in the US version is greater than the error in the European version:

Comparing the US version of Pi day (3.14) to the European version (22/7) with Wolfram Alpha

Europe wins this time. 22/7 is a better approximation than 3.14.
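The same comparison takes only a few lines of Python, if you’d rather not leave your terminal:

```python
import math

us_error = abs(math.pi - 3.14)    # error in the US date form, 3.14
eu_error = abs(math.pi - 22 / 7)  # error in the European form, 22/7

print(f"US error:       {us_error:.6f}")
print(f"European error: {eu_error:.6f}")
print("22/7 wins" if eu_error < us_error else "3.14 wins")
```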