How we’re mastering test management

IAS Tech Blog
4 min read · Oct 18, 2018

by Reeve Philip, QA Lead, Integral Ad Science

This piece was originally published on the IAS Insider

For a lot of developers, testing can seem like a black box, something to be ignored until problems arise. For too many, the time to think about testing is the first time an exasperated manager asks, "Wasn't this tested?" But quality isn't just a problem for the QA team. It's the responsibility of everyone involved in a project to ensure that the product that ships is as free of flaws and bugs as possible. The more people who participate in the review process, the more valuable feedback we receive on scenarios that should be tested and functional areas that are no longer valid.

The purpose of this article is to explore the Test Management process and to help you decide on a tool to help facilitate it. First, we’ll explore some of the common methods used to manage testing. Then we’ll lay out the test management process that the IAS team has found most effective.

  1. The human touch

All too often teams look to their most experienced members, the people who know the ins and outs of a particular project, to catch any major quality problems. We trust these subject matter experts to execute whatever ad-hoc testing plan they think is appropriate. This could include a mix of manual and automated tests, but the driving force behind this process is the expert’s experience rather than any sort of repeatable set of values or standards. Relying on experience might seem like a safe bet if you want to move quickly, but without a way to systematically capture and share this knowledge such a system leaves you highly vulnerable to breakdowns due to an unexpected departure, an extended vacation, or plain old human error.

  2. The Post-It Note Approach

In some cases, teams might rely on "Post-Its," digital or even physical reminders detailing the things that need to be done to make a product ready for the customer. The danger with this approach is that with a large, fast-moving team, there is a high likelihood that these documents will not be kept up to date to reflect the latest changes, let alone be shared with others. If these digital Post-Its are stored on local machines, then all of the test details could be lost should something happen to that machine. A shareable document could solve that problem, but it still leaves the problem of maintaining an up-to-date record that accounts for test runs per milestone. Ultimately this method falls short of capturing all test scenarios in an accurate, readable, and repeatable way.

Our Solution

To crack this problem, we evaluated a number of test management tools before we ultimately settled on TestRail. TestRail offers REST API integration, through which we can annotate our automated tests so that the corresponding test cases are created in TestRail automatically when the pull request is merged in GitHub. Another key point is that we can use the TestRail API to auto-generate reports from test runs, which proves useful when sharing results with others.
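As a rough illustration of what that API integration looks like, the sketch below builds a request against TestRail's `add_result_for_case` endpoint (a real TestRail API v2 endpoint) and posts a pass/fail result. The instance URL, run and case IDs, and credentials are placeholders, not IAS's actual setup:

```python
# Hedged sketch: reporting an automated test result to TestRail's REST API v2.
# The instance URL and credentials below are placeholders for illustration.
import base64
import json
import urllib.request

TESTRAIL_URL = "https://example.testrail.io"  # placeholder instance URL


def build_result_request(run_id: int, case_id: int, passed: bool, comment: str):
    """Build the endpoint URL and JSON payload for add_result_for_case."""
    endpoint = (
        f"{TESTRAIL_URL}/index.php?/api/v2/"
        f"add_result_for_case/{run_id}/{case_id}"
    )
    payload = {
        "status_id": 1 if passed else 5,  # TestRail: 1 = Passed, 5 = Failed
        "comment": comment,
    }
    return endpoint, payload


def post_result(endpoint: str, payload: dict, user: str, api_key: str):
    """Send the result; TestRail authenticates via HTTP basic auth."""
    token = base64.b64encode(f"{user}:{api_key}".encode()).decode()
    request = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
    )
    return urllib.request.urlopen(request)  # returns the HTTP response
```

A CI job could call `build_result_request` after each automated run and push the outcome, so TestRail stays in sync without anyone updating it by hand.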

Now, of course, a tool by itself won't be a silver bullet for test management; it only helps organize what you put into it. I have seen cases where tests were written in a BDD format (i.e. GIVEN-WHEN-THEN). The problem with simply writing tests in any BDD framework is that the text is written independently of the code. It needs to be maintained, or it can quickly go out of date.

First, we manually added annotations indicating what each step does, and these have been successfully pushed into TestRail through the API integration. Second, we plan to focus on our Python component tests, which are written with a highly readable set of clauses. We can mark lines in our DSL as @documented, which builds a line-by-line description of the test as each DSL function is executed.
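The article doesn't show the @documented mechanism itself, but the general idea can be sketched as a decorator that appends a readable line to a running description each time a marked DSL function executes. All names and templates here are hypothetical stand-ins, not the actual IAS DSL:

```python
# Hedged sketch of the @documented idea: a decorator (names are hypothetical)
# that records a human-readable line each time a marked DSL function runs.
from functools import wraps

test_description = []  # accumulated line-by-line description of the test


def documented(template):
    """Mark a DSL function; `template` is filled in with the call's kwargs."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            test_description.append(template.format(**kwargs))
            return func(*args, **kwargs)
        return wrapper
    return decorator


@documented("Send {count} impressions to the measurement endpoint")
def send_impressions(count=1):
    pass  # a real DSL body would drive the system under test


@documented("Verify {count} impressions were recorded")
def verify_impressions(count=1):
    pass


send_impressions(count=3)
verify_impressions(count=3)
# test_description now reads like a step-by-step test case
```

Because the description is generated from the code as it runs, it cannot drift out of date the way hand-written BDD text can.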

At the moment we are building a POC that generates the test description automatically, though it is not plugged into the TestRail API yet. It can also take any parameters passed to a DSL function via parameterized test annotations and generate a specific test description for each parameter set. At the end of the day, it's not dissimilar to other BDD frameworks in that the output is a human-readable report.
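The per-parameter behavior can be sketched as a small helper that expands one description template into a distinct line for each parameter set, loosely mirroring how a parameterized test annotation would feed the DSL. The template and parameter names below are invented for illustration:

```python
# Hedged sketch: one specific, readable description per parameter set,
# roughly what a parameterized test annotation could produce.
def describe_runs(template, param_sets):
    """Fill the template once per parameter set, yielding one line each."""
    return [template.format(**params) for params in param_sets]


descriptions = describe_runs(
    "Viewability is {expected} when the ad is {visible_pct}% in view",
    [
        {"visible_pct": 100, "expected": "measured"},
        {"visible_pct": 0, "expected": "not measured"},
    ],
)
```

Each generated line can then be attached to its own test run, so a parameterized test reads as several concrete scenarios rather than one opaque case.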

We are very close to a streamlined test management process that doesn’t rely on individual team members or a massive running list of tasks and results. Using TestRail we are creating a process that allows the entire team to contribute to the overall quality of the product while cutting down on human error and without adding a ton of additional tracking work to anyone’s plate.
