
Have I been using TestRail's runs wrong all this time? (test coverage discussion)


#1

Hi there, right now we use baselines, so we maintain a master suite. When we want to do an execution, we create a test plan, create our suites, cherry-pick our test cases, and run against them.

I’m wondering if I am missing out on important functionality.

What is typically the best practice when selecting suites? Other test case management tools let you group test cases into suites (e.g. by feature), assign supported configurations against them, and then run them. I’ve just been doing this on the fly. Is the recommended practice to have a test plan for each module, with a suite per feature that defines my configurations, which I then just re-run whenever I want to execute? Basically, I’m trying to calculate my coverage across configurations.

Let me explain:

Say we support 2 browsers and 1 OS, so:

Windows – Chrome
Windows – Firefox

Say we have 10 test cases, so with that config set, we now have up to 20 runs

Let’s say 5 of those tests are not applicable to Firefox

So now we have up to 15 run possibilities

So I have those 15 possibilities and I want to compare that against what I actually ran. Let’s say we ran every possible test except for 1: that would give me a total coverage of 14/15, broken down per platform (Windows – Chrome, Windows – Firefox), and in that report I’d also see exactly which test wasn’t executed.
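
To make that concrete, here’s a rough Python sketch of the math I’m describing (the case IDs and the applicability/executed maps are invented just for illustration):

```python
# Rough sketch of the coverage math above. Case IDs 1-10 and the
# applicability map are made up; cases 6-10 stand in for the 5 tests
# that don't apply to Firefox.
applicable = {
    "Windows – Chrome": set(range(1, 11)),        # all 10 cases apply
    "Windows – Firefox": set(range(1, 6)),        # only cases 1-5 apply
}

# Pretend execution results: everything ran except case 3 on Firefox.
executed = {
    "Windows – Chrome": set(range(1, 11)),
    "Windows – Firefox": {1, 2, 4, 5},
}

total = sum(len(cases) for cases in applicable.values())   # 15 possibilities
ran = sum(len(applicable[cfg] & executed.get(cfg, set())) for cfg in applicable)

print(f"Overall coverage: {ran}/{total}")                  # 14/15
for cfg, cases in applicable.items():
    done = cases & executed.get(cfg, set())
    missed = sorted(cases - done)
    print(f"{cfg}: {len(done)}/{len(cases)} (not executed: {missed or 'none'})")
```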

Now let’s look at the tool. TestRail has no way to define those environments at the test case level; that is done in the test plan/suite. So I add the cases to the suite and then assign my configurations. Then, for each configuration, I can go and assign the specific test cases that apply. The problem is that this is a maintenance nightmare and isn’t really practical.

So I’m curious what your solution is for the metric I’m trying to get.

In addition, based on your answer, I’d like to know the following:

  • What if a new test case is added? How are the assignments updated without maintenance issues?
  • What if a new configuration is added? How are the configuration assignments updated without maintenance issues?

Now, I wanted to describe the most detailed, accurate metric possible. In a real-world scenario this is pure overkill and might be impossible, but I wanted to see what the options are. Our real-world scenario is that we support a ton of platforms with very specific versions (e.g. NSX, ESX, AWS, Azure, pretty much all Linux distros, all Windows versions, etc…), so tracking our coverage is important to us. The option we’ll likely go with is to pull the execution results from the test case management tool, extract them, compare them against our matrix of supported platforms, calculate the coverage that way, and do some cherry-picking on top.
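
For that API approach, something like this is what I have in mind: pull plans/runs over TestRail’s REST API v2 and compare against our own coverage matrix. The instance URL, IDs, and the matrix itself are placeholders, and pagination/status handling is simplified, so treat it as a rough sketch rather than working tooling:

```python
# Rough sketch: pull executed results via TestRail's REST API v2 and compare
# them against our own matrix of supported platforms. IDs, URL and the matrix
# are placeholders; newer TestRail versions wrap list responses for pagination,
# which the .get(...) calls below account for.
import requests

BASE = "https://example.testrail.io/index.php?/api/v2"    # placeholder instance
AUTH = ("user@example.com", "api-key")                     # email + API key

def get(endpoint):
    resp = requests.get(f"{BASE}/{endpoint}", auth=AUTH)
    resp.raise_for_status()
    return resp.json()

PROJECT_ID = 1                                             # placeholder

# Our own matrix of what *should* run: config name -> applicable case IDs.
supported_matrix = {
    "Windows - Chrome": {101, 102, 103},
    "Windows - Firefox": {101, 102},
}

# Collect what actually ran, grouped by the run's configuration string.
executed = {cfg: set() for cfg in supported_matrix}
plans = get(f"get_plans/{PROJECT_ID}")
for plan in plans.get("plans", plans):
    detail = get(f"get_plan/{plan['id']}")
    for entry in detail["entries"]:
        for run in entry["runs"]:
            cfg = run.get("config")                        # e.g. "Windows - Chrome"
            if cfg not in executed:
                continue
            tests = get(f"get_tests/{run['id']}")
            for t in tests.get("tests", tests):
                if t["status_id"] != 3:                    # 3 = Untested (default)
                    executed[cfg].add(t["case_id"])

# Coverage per configuration, plus the cases that are still missing.
for cfg, expected in supported_matrix.items():
    ran = expected & executed[cfg]
    print(f"{cfg}: {len(ran)}/{len(expected)} covered, missing {sorted(expected - ran)}")
```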

Thoughts?

Also, a side question: is there a way to make a report that says “for these test cases in section A plus all sub-sections, give me their test results, ordered by milestone up to the latest”? I think not; I can only see that I can filter them, but section isn’t a filter option, so I’d need to use the reference field (for example)… or use a custom report. Any ideas?
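
If nothing built-in covers it, my fallback would again be the API: walk the section tree under section A, collect its cases, and then group their results by each run’s milestone. Same caveats as above, the IDs, instance details, and the simplistic milestone ordering are just assumptions for illustration:

```python
# Fallback sketch for the section question: collect cases under section A
# (including sub-sections) and group their results by each run's milestone.
import requests

BASE = "https://example.testrail.io/index.php?/api/v2"     # placeholder instance
AUTH = ("user@example.com", "api-key")

def get(endpoint):
    resp = requests.get(f"{BASE}/{endpoint}", auth=AUTH)
    resp.raise_for_status()
    return resp.json()

PROJECT_ID, SUITE_ID, ROOT_SECTION_ID = 1, 5, 42           # placeholders

# Build the set of section A plus all sections nested under it.
sections = get(f"get_sections/{PROJECT_ID}&suite_id={SUITE_ID}")
sections = sections.get("sections", sections)
wanted = {ROOT_SECTION_ID}
changed = True
while changed:
    changed = False
    for s in sections:
        if s["parent_id"] in wanted and s["id"] not in wanted:
            wanted.add(s["id"])
            changed = True

cases = get(f"get_cases/{PROJECT_ID}&suite_id={SUITE_ID}")
case_ids = {c["id"] for c in cases.get("cases", cases) if c["section_id"] in wanted}

# Group test results by the milestone of the run they belong to.
by_milestone = {}
runs = get(f"get_runs/{PROJECT_ID}")                        # standalone runs only;
for run in runs.get("runs", runs):                          # plan runs need get_plans
    tests = get(f"get_tests/{run['id']}")
    for t in tests.get("tests", tests):
        if t["case_id"] in case_ids:
            by_milestone.setdefault(run.get("milestone_id"), []).append(t)

# Print milestones in ID order (runs with no milestone last).
for m_id in sorted(by_milestone, key=lambda m: (m is None, m)):
    print(f"milestone {m_id}: {len(by_milestone[m_id])} results")
```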

Thank you!


#2

I also face similar problems, and I am curious how others solve this.


#3

Same 🙂 I think what most tools do is let you set a list of platform configs against each suite… so we know feature X runs against configs A, B, C, and feature Y runs against A through F, and then you calculate coverage that way, adding test cases to the suite as you go rather than assigning configs at the individual test case level, which is a maintenance nightmare.
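
Something along these lines, where coverage is counted per (feature, config) pair rather than per individual test case (the feature names and config letters are just made up):

```python
# Quick sketch: coverage per (feature, config) pair instead of per test case.
feature_configs = {
    "Feature X": {"A", "B", "C"},
    "Feature Y": {"A", "B", "C", "D", "E", "F"},
}

# Pretend record of which (feature, config) combinations actually had a run.
executed_pairs = {
    ("Feature X", "A"), ("Feature X", "B"),
    ("Feature Y", "A"), ("Feature Y", "C"),
}

total = sum(len(cfgs) for cfgs in feature_configs.values())
ran = sum(1 for feat, cfgs in feature_configs.items()
          for cfg in cfgs if (feat, cfg) in executed_pairs)

print(f"Config coverage: {ran}/{total}")
for feat, cfgs in feature_configs.items():
    missing = [c for c in sorted(cfgs) if (feat, c) not in executed_pairs]
    print(f"{feat}: missing configs {missing or 'none'}")
```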

TestRail offers configurations and ‘suites’ in the test run, but these are really tied to the execution process rather than being standalone. That’s why my question about best practice came up, e.g. do other companies have these ‘do not delete me’ test plans where they build a suite for each of their features and then just re-run that test plan whenever they want to execute a suite?