
How to view "defect rate"?


#1

Given failedCases / executedCases = defectRate, do any of you use this metric as a measure of quality? If so, has anyone found a way to display the cumulative, project-wide defect rate in TestRail?
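
In code terms, this is all I mean (a toy illustration with made-up numbers, not anything TestRail exposes):

```python
# Toy illustration of the metric (made-up numbers, my own names).
failed_executions = 12    # every Failed result ever logged, including re-runs
total_executions = 150    # every result logged, including re-runs
defect_rate = failed_executions / total_executions  # 0.08, i.e. 8%
```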

What I mean is… on any given day, I can see a TestRail chart showing what percentage of executed tests are currently in the Failed status. Once all defects have been corrected and all tests have subsequently been executed and passed, however, there is no longer any evidence of the defect rate. TestRail simply displays “100% passed”.

Clearly, before I reached “100% passed”, I executed N test cases, and may have re-executed subsets of those N tests multiple times as fixes were attempted, failed, and were attempted again.

So if I wanted to show the total defect rate of the project or release (Test Plan)… including all executions + failures + re-executions… how do I do this in TestRail?

Is anyone else interested in this metric?


#2

Hi!

Thanks for your post! You can use the Defects tab to help with this (added with TestRail 5.0).

The tab is available for milestones, plans, and runs, and it includes the number of logged defects, executed tests, and added results. Other useful reports include the Summary > Defects report on the Reports tab, which also supports a more flexible run selection (e.g. across the entire project).

I hope this helps!

Cheers,
Tobias


#3

What I’m looking for is an automatic calculation of total failed test executions / total test executions for a given test plan and/or milestone. The key point being executions, not test cases. This represents what percentage of my test executions failed during the entire project/release.

It’s a metric that helps people understand how much extra work the QA team had to do as a consequence of defects in the release (i.e. we have to retest all the failed cases, possibly more than once if a fix isn’t good, which consumes more testing time than we had planned). We base our testing duration estimates on an assumed defect rate; if the actual defect rate exceeds that assumption, it’s grounds for extending the test schedule.
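
As a toy example of why executions matter for the schedule (my own numbers, nothing TestRail-specific):

```python
# Toy numbers illustrating executions vs. cases (assumptions, not real data).
# 100 test cases; 20 failed once and were re-executed after fixes,
# and 5 of those failed again and needed a third execution.
total_executions = 100 + 20 + 5        # 125 executions for 100 cases
failed_executions = 20 + 5             # 25 failed executions
actual_rate = failed_executions / total_executions   # 25/125 = 20%

assumed_rate = 0.10   # the rate our schedule estimate assumed
if actual_rate > assumed_rate:
    print("Actual defect rate exceeds the assumption; extend the test schedule.")
```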

The defect count, while valuable in its own way, is misleading here because it doesn’t show how many test case executions were done (and potentially redone) as a result of those defects.

Is there some other more elegant way to achieve this?


#4

Thanks for your feedback! The Defects tabs include these numbers as well. Depending on how you organize your testing, you would look either at the number of tests or at the number of added results. The number of tests makes sense if you start new test runs (“rerun”) to retest logged defects; the number of results is a better fit if you retest within the same test run (e.g. with the typical status order Failed -> Retest -> Passed). Both numbers, plus the number of defects, can be found in the chart on the Defects tab. In most cases, the number of results is the more accurate figure and is the recommended basis for calculating a defect rate.
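
If you want to script the results-based calculation yourself, a rough sketch against the TestRail API v2 could look like the following (get_results_for_run and the default status IDs are documented; the instance URL, credentials, and run IDs here are placeholders, and error handling is minimal):

```python
# Rough sketch using the TestRail API v2. The get_results_for_run
# endpoint and the default status IDs are documented; the base URL,
# credentials, and run IDs below are placeholders.
import requests

BASE = "https://example.testrail.io/index.php?/api/v2"  # placeholder instance
AUTH = ("user@example.com", "your-api-key")             # placeholder credentials

def results_for_run(run_id):
    """Fetch the results logged for one test run."""
    resp = requests.get(f"{BASE}/get_results_for_run/{run_id}",
                        auth=AUTH,
                        headers={"Content-Type": "application/json"})
    resp.raise_for_status()
    data = resp.json()
    # Newer TestRail versions paginate (250 results per page) and wrap the
    # list in a "results" key; older versions return a plain JSON array.
    # For simplicity, this sketch only reads the first page.
    return data["results"] if isinstance(data, dict) else data

def defect_rate(run_ids):
    """Failed results / all results, across every run in the plan."""
    results = [r for rid in run_ids for r in results_for_run(rid)]
    # Skip comment-only entries, which have no status_id.
    statused = [r for r in results if r.get("status_id")]
    failed = sum(1 for r in statused if r["status_id"] == 5)  # 5 = Failed (default)
    return failed / len(statused) if statused else 0.0

print(f"{defect_rate([101, 102]):.1%}")  # run IDs are placeholders
```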

Cheers,
Tobias