
Am I using TestRail optimally? How does everyone else use it?

I am about six weeks into using TestRail for the first time. As much as I am enjoying it, I feel like I am not using it optimally when it comes to Milestones, Test Plans and Test Runs, executing test cases, and Reporting.
I am the sole user, working in a two-week sprint cycle. Currently it only captures manual testing, mostly exploratory and functional.

I have created one milestone for Release v1 of the app, with sub-milestones capturing each two-week sprint:

  • Release v1
    • Sprint 3 etc
    • Sprint 2
    • Sprint 1

So far I quite like this structure. Each sub-milestone has multiple Test Plans contained within, and I can generate reports at the end of the sprint for the sub-milestone (sprint).
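As an aside, this hierarchy can also be set up programmatically through TestRail's API v2 `add_milestone` endpoint, where `parent_id` nests a sub-milestone under a release. A minimal sketch follows; the base URL, project ID, credentials, and the milestone id `100` are all placeholders, and the actual POST is left commented out:

```python
# Sketch: build add_milestone payloads for a release plus per-sprint
# sub-milestones. BASE_URL, PROJECT_ID, and the id 100 are placeholders.
BASE_URL = "https://example.testrail.io/index.php?/api/v2"
PROJECT_ID = 1

def milestone_payload(name, parent_id=None):
    """JSON body for add_milestone; parent_id makes it a sub-milestone."""
    body = {"name": name}
    if parent_id is not None:
        body["parent_id"] = parent_id
    return body

release = milestone_payload("Release v1")
# 100 stands in for the release milestone id the server would return.
sprints = [milestone_payload(f"Sprint {n}", parent_id=100) for n in (1, 2, 3)]

# Actual call (requires the requests library and a TestRail API key):
# import requests
# r = requests.post(f"{BASE_URL}/add_milestone/{PROJECT_ID}",
#                   json=release, auth=("user@example.com", "API_KEY"))
```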
However, I am unsure whether to close a sub-milestone (sprint) that still has failed test cases and active defects, knowing full well they may be resolved in the following sprint. Should I close it and carry that failed count into the parent milestone permanently? Or should I keep the sub-milestone open until the failed test cases can be passed, so the failures never count against the parent milestone's completion percentage?

Option 1: close sprints with failures, carrying the failed count up:

  • Release v1 (76% pass rate) (active)
    • Sprint 3 (75% pass rate) (active)
    • Sprint 2 (85% pass rate) (closed)
    • Sprint 1 (70% pass rate) (closed)

Option 2: keep sprints open until failed cases pass:

  • Release v1 (91% pass rate) (active)
    • Sprint 3 (75% pass rate) (active)
    • Sprint 2 (100% pass rate) (closed)
    • Sprint 1 (100% pass rate) (closed)
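To make the difference between the two policies concrete, here is a small sketch with hypothetical passed/failed counts per sprint (10, 20, and 20 cases, roughly matching the percentages above). The parent rate is just total passed over total executed, so whether failures are retested to green before closing changes the roll-up:

```python
def parent_pass_rate(sprints):
    """Parent milestone pass rate from per-sprint (passed, failed) counts."""
    total = sum(p + f for p, f in sprints)
    passed = sum(p for p, _ in sprints)
    return round(100 * passed / total)

# Policy 1: sprints closed with failures intact, so failed counts roll up.
close_with_failures = [(7, 3), (17, 3), (15, 5)]   # 70%, 85%, 75% per sprint
# Policy 2: sprints held open until failed cases are retested and pass.
retest_until_green = [(10, 0), (20, 0), (15, 5)]   # 100%, 100%, 75%

print(parent_pass_rate(close_with_failures))  # 78
print(parent_pass_rate(retest_until_green))   # 90
```

The spread between the two numbers is exactly the failures you chose to freeze into closed sprints, which is why the decision is really about what you want the parent percentage to mean.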

Additionally, I am creating Test Plans within each sub-milestone (sprint). I have explored different approaches to this.

Do people prefer to track one user story per Test Plan, with, say, an exploratory test run and a functional test run inside each? That would mean multiple Test Plans within each sub-milestone.

Or a Test Plan each for exploratory testing, functional testing, and other sprint-specific objectives, where each Test Plan contains a test run per user story?

With reporting, I have found the Status Tops (Cases) and Summary for Cases (Defects) reports the most insightful so far.
I'm curious to hear which reports other testers value.

Any insight into fellow testers' strategies and high-level use of TestRail would be much appreciated.



Test plans with failed test cases at the end of a sub-milestone:

  • I move the test run/plan to the next sub-milestone (of the current release) when the associated bugs will be fixed within the current release (parent milestone).
  • I close the test run/plan with the failed results when the associated bugs won't be fixed in the current release.

I use one test run/plan per user story and add additional regression test runs.

I only use reports to keep track of test case health, automation percentage, and sprint closure activities, NOT sprint progress itself; the latter comes from the milestone statistics.


Hi Jasper, thanks for sharing that. It’s nice to know I’m not too far away from how you implement your workflow.

Really appreciate that.