Hi, I think an update to the test run result pie chart would be great. Is it possible to show only the statuses the team actually used during execution, rather than showing all statuses? (I cannot see the value of displaying statuses that haven't been used in a test run.) I am more interested in knowing which statuses the tests are in at a given time.
Thanks for your feedback. That's currently not possible, but we are happy to consider it for a future version (and I've just added it to our feature request list). We haven't received this request before, but it definitely makes sense for keeping the stats easier and faster to read. Thanks again!
I think that's a nice idea; it also supports the view in the pie chart.
So add a vote from me.
Please add another vote to this topic.
At least it would be great to be able to exclude the N/A or Skipped tests from the reporting.
Added, thanks, Benjamin!
I really like this idea. Could you add another vote to this topic?
Thanks for your feedback, Wojtek, added the vote
Please add my vote to include this feature.
As others have stated, it would be nice to exclude test cases with a status of “Not Applicable” from the pie chart. It would also be nice to include “Untested” in the color-coded legend of the pie chart. Thanks for the great tool.
Thanks for your post. We recommend excluding tests that don't apply from the test run; you can simply change the case selection when editing a test run/plan. That said, I'm happy to add another vote to this feature request, as it certainly makes sense in some scenarios. Thanks for your feedback!
I would like to add a vote to both of these suggestions (customizing which statuses are calculated in the Pass %, and only displaying statuses that have a value).
As others have said, we have some statuses that we don't want counted against the “Not Passed” number, and some statuses that we'd like included in the “Pass” number but that have different names for tracking purposes.
The workaround Tobias suggested is an option that we use, but being able to customize those reports would give us more accurate data and would be preferable in the long run. Additionally, that workaround doesn't help when looking at milestones: if we separate our “non-pass” tests into their own run, those runs are still counted in the milestone's overall pass rate (which, again, we find to be inaccurate).
Thanks for your feedback, Ryan, that’s appreciated! Happy to add another vote to this feature request and, despite the age of this thread, it’s definitely still planned to look into this.
+1 on the OP's suggestion. My team has the same pain: all “skipped” test cases were factored into the pass rate, showing a much lower quality bar than was really the case. This prevents my team from showing the TestRail page on our status kiosk, and we have to write a new tool just to show the real pass rate!
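The recalculation this kind of external tool has to do can be sketched as follows. This is a minimal illustration, not TestRail's actual implementation; the status names (`passed`, `skipped`, etc.) and the `adjusted_pass_rate` helper are hypothetical, and which statuses count as "not executed" is exactly the customization being requested in this thread.

```python
# Hypothetical sketch: recompute a pass rate that ignores skipped/untested
# tests, given a mapping of status name -> test count. Status names are
# illustrative, not TestRail's canonical identifiers.

def adjusted_pass_rate(status_counts, excluded=("skipped", "untested", "n/a")):
    """Pass rate over executed tests only: passed / (total - excluded)."""
    excluded_set = {s.lower() for s in excluded}
    # Count only tests whose status is not in the excluded set.
    executed = sum(n for s, n in status_counts.items()
                   if s.lower() not in excluded_set)
    if executed == 0:
        return 0.0
    return status_counts.get("passed", 0) / executed

counts = {"passed": 45, "failed": 5, "skipped": 50}
print(adjusted_pass_rate(counts))  # 0.9, instead of 0.45 over all 100 tests
```

With 50 of 100 tests skipped, counting them against the run reports 45%, while the rate over executed tests is 90%, which is the discrepancy described above.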
Thanks for your feedback! We would usually recommend removing those tests from the runs (by updating the case selection), which would also change the statistics and the overall pass rate.
Add another vote for me please if this is not already in the works.
Counting the N/A tests against the pass rate of a suite or run is misleading when reviewing results.
Your vote has been added.
Please add another vote for me.
The workaround (removing those test cases from the run) does not work for us because we then lose the audit trail. When we descope a test case, we document why in the test case itself; removing it from the run removes our ability to do that.
This seems popular and 4 years old. Any updates? +1 from me as well.
Yes, could we have an update on this feature?
The following workflow cannot be achieved without it:
Knowing that, here are the possibilities:
How should we deal with this scenario?