
% of passed test cases, and their basis (states)


#21

Hi,
I really like this idea. Could you add another vote to this topic? :slight_smile:
Cheers
Wojtek


#22

Thanks for your feedback, Wojtek, added the vote :slight_smile:

Cheers,
Tobias


#23

Please add my vote to include this feature.
As others have stated, it would be nice to exclude test cases with a status of “Not Applicable” from the pie chart. It would also be nice to include “Untested” in the color-coded legend of the pie chart. Thanks for the great tool.


#24

Hi!

Thanks for your post! We’d generally recommend excluding tests that don’t apply from the test run; you can simply change the case selection when editing a test run/plan. That said, I’m happy to add another vote to this feature request, as it certainly makes sense in some scenarios. Thanks for your feedback!
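
For reference, the case selection of an existing run can also be changed through the TestRail API. This is only a minimal sketch, assuming the public API v2 and the Python requests library; the URL, credentials, run ID and case IDs below are placeholders:

```python
import requests

BASE_URL = "https://example.testrail.io/index.php?/api/v2"
AUTH = ("user@example.com", "api-key")    # TestRail email + API key
RUN_ID = 42                               # placeholder run ID
APPLICABLE_CASE_IDS = [1001, 1002, 1005]  # cases that actually apply to this run

# Switch the run from "include all cases" to an explicit case selection,
# which drops the non-applicable tests from the run and its statistics.
response = requests.post(
    f"{BASE_URL}/update_run/{RUN_ID}",
    auth=AUTH,
    json={"include_all": False, "case_ids": APPLICABLE_CASE_IDS},
)
response.raise_for_status()
run = response.json()
print(f"{run['passed_count']} passed of {len(APPLICABLE_CASE_IDS)} cases in the run")
```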

Cheers,
Tobias


#25

Hello,

I would like to add a vote to both of these suggestions (customizing which statuses are counted toward the Pass %, and only displaying statuses that have a value).

As others have said, we have some statuses that we don’t want counted against the “Not Passed” number, and other statuses that we’d like included in the “Pass” number but that have different names for tracking purposes.

The workaround Tobias suggested is an option that we use, but being able to customize those reports would give us more accurate data and would be preferable in the long run. Additionally, that workaround doesn’t help when looking at milestones. If we separate our “non-pass” tests into their own run, those runs are still counted in the milestone’s overall pass rate (which, again, we would find to be inaccurate).
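
To illustrate the milestone side of this, here is a minimal sketch of recomputing a milestone’s pass rate outside TestRail so that untested tests and custom “N/A”-style statuses are left out of the denominator. It assumes the public API v2 and the Python requests library; the project and milestone IDs are placeholders, and note that get_runs does not cover runs that live inside test plans:

```python
import requests

BASE_URL = "https://example.testrail.io/index.php?/api/v2"
AUTH = ("user@example.com", "api-key")
PROJECT_ID = 1      # placeholder project ID
MILESTONE_ID = 7    # placeholder milestone ID

# get_runs only returns runs that are not part of a test plan; runs inside
# plans would need get_plans/get_plan as well.
runs = requests.get(
    f"{BASE_URL}/get_runs/{PROJECT_ID}&milestone_id={MILESTONE_ID}",
    auth=AUTH,
).json()
if isinstance(runs, dict):   # newer TestRail versions wrap results for pagination
    runs = runs["runs"]

passed = sum(r["passed_count"] for r in runs)
# Denominator uses the built-in status counts only; untested tests and custom
# statuses (custom_status1_count etc., e.g. "N/A") are deliberately left out.
counted = sum(
    r["passed_count"] + r["failed_count"] + r["blocked_count"] + r["retest_count"]
    for r in runs
)
if counted:
    print(f"Milestone pass rate: {100.0 * passed / counted:.1f}% ({passed}/{counted})")
```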

Thanks!


#26

Thanks for your feedback, Ryan, that’s appreciated! Happy to add another vote to this feature request and, despite the age of this thread, it’s definitely still planned to look into this.

Cheers,
Tobias


#27

+1 on the OP’s suggestion. My team has the same pain: all “Skipped” test cases are factored into the pass rate, which shows a much lower quality bar than we actually have. This prevents my team from showing the TestRail page on our status kiosk, and we had to write a new tool just to show the real pass rate!
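
For illustration, a minimal sketch of that kind of tool (not the actual one): it recomputes a run’s pass rate with skipped tests left out of the denominator, assuming the public TestRail API v2 and the Python requests library. The run ID and the custom “Skipped” status ID are placeholders:

```python
import requests

BASE_URL = "https://example.testrail.io/index.php?/api/v2"
AUTH = ("user@example.com", "api-key")
RUN_ID = 42            # placeholder run ID

PASSED = 1             # built-in "Passed" status ID
UNTESTED = 3           # built-in "Untested" status ID
SKIPPED = 6            # custom statuses start at ID 6; look yours up via get_statuses

tests = requests.get(f"{BASE_URL}/get_tests/{RUN_ID}", auth=AUTH).json()
if isinstance(tests, dict):   # newer TestRail versions wrap results for pagination
    tests = tests["tests"]

# Leave skipped (and still-untested) tests out of the denominator entirely.
relevant = [t for t in tests if t["status_id"] not in (UNTESTED, SKIPPED)]
passed = sum(1 for t in relevant if t["status_id"] == PASSED)
if relevant:
    print(f"Real pass rate: {100.0 * passed / len(relevant):.1f}% "
          f"({passed}/{len(relevant)} counted tests)")
```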


#28

Hi Jeff,

Thanks for your feedback! We would usually recommend removing those tests from runs (by updating the case selection), which would also change the statistics and overall pass rate.

Cheers,
Tobias


#29

Please add another vote for me if this is not already in the works.
Counting the N/A tests against the pass rate of a suite or run is misleading when reviewing results.


#30

Your vote has been added!


#31

Please add another vote for me.

The workaround (removing those test cases from the run) does not work for us because we then lose an audit trail. When we descope a test case, we document why in the test case. Removing it from the run removes our ability to do that.

Much appreciated.


#32

This seems popular and 4 years old. Any updates? +1 from me as well.


#33

Yes, could we have an update on this feature?

The following workflow cannot be achieved without it:

  • We have a test plan that is “really” executed only once, testing a lot of different features
  • We have different customers using different sets of these features. Some of the features are shared by multiple customers; others are not.
  • We want to be able to create relevant reports for each customer

Knowing that, here are the possibilities:

  • If we generate reports containing only filtered test cases for each customer, the pie charts are not correct (see the sketch after this list) => not an option
  • If we generate reports without any filtering, every customer gets all the results of the test plan, including things that are not relevant to them or are confusing => not an option
  • If we create one test plan per customer, we report and save the same result several times in the TestRail database, at least for the shared features. The same test, executed only once, has its results pasted into multiple TestRail test plans. It’s time-consuming, a source of errors, and inefficient from a storage and database point of view. => not an option
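
For reference, a minimal sketch of building per-customer pass rates from a single shared run outside TestRail, assuming the public API v2 and the Python requests library; the run ID and the customer-to-case mapping are placeholders:

```python
import requests

BASE_URL = "https://example.testrail.io/index.php?/api/v2"
AUTH = ("user@example.com", "api-key")
RUN_ID = 42            # the single shared run, executed once
PASSED = 1             # built-in "Passed" status ID

# Placeholder mapping of which cases are in scope for which customer.
CUSTOMER_CASES = {
    "customer_a": {1001, 1002, 1003},
    "customer_b": {1002, 1005},
}

tests = requests.get(f"{BASE_URL}/get_tests/{RUN_ID}", auth=AUTH).json()
if isinstance(tests, dict):   # newer TestRail versions wrap results for pagination
    tests = tests["tests"]

# Compute each customer's pass rate over only the cases in their scope.
for customer, case_ids in CUSTOMER_CASES.items():
    scoped = [t for t in tests if t["case_id"] in case_ids]
    passed = sum(1 for t in scoped if t["status_id"] == PASSED)
    if scoped:
        print(f"{customer}: {passed}/{len(scoped)} passed "
              f"({100.0 * passed / len(scoped):.1f}%)")
```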

How should we deal with this scenario?