We have a situation that I suspect is not unique. We need to qualify "drops": specific builds with specific changes and specific testing objectives. We also need to ensure that essentially all currently written tests get executed against the release at some point. Ideally, in a perfect world, each drop would be tested thoroughly with every test; alas, this is not practical. So I need a compromise: I need to run and track the specific tests targeted for each drop, AND I need to track overall testing progress for the release.
With the current implementation of TestRail, what is my best bet? To date I've created Test Plans that incorporate the specific tests for each drop (milestone) and relied on my superior test planning skills to avoid unnecessary duplication and excessive test times in any given milestone-specific Test Plan, dropping suites the team has already tested from the next milestone's Test Plan and adding the ones it has not. Unfortunately, there is no one place I can go to track overall progress in this usage model.
Alternatively, I could just create one plan for the release containing every test and track overall release testing progress there (tagging tests as pass or retest with version information), but this makes it problematic to see where I am in testing the current drop.
I could run both a release Test Plan and a milestone Test Plan concurrently, but that requires testers to enter results in both places, which is obviously undesirable and error prone.
Is there a way to summarize multiple Test Plans in the TestRail sense? In other words, if I have plans for drops 1, 2, and 3, could I create a "plan" that effectively concatenates those results into a summary plan? Alternatively, could a feature be added to redefine Test Plans as "Test Passes" and then define "Test Plans" as a collection of "Test Passes" (each of which is in turn a collection of multiple "Test Runs")?
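Lacking a built-in summary plan, one workaround is to roll up the numbers yourself. The TestRail API's get_plan endpoint returns each plan's entries and runs with per-run result counters (passed_count, failed_count, etc.), so a small script can add those up across the drop plans. The sketch below is a minimal illustration of that idea with made-up sample payloads standing in for real API responses; plan IDs, field names, and data are assumptions, not tested against a live instance.

```python
# Hypothetical sketch: roll up result counters from several Test Plans
# into one release-level summary. In practice each plan dict would come
# from GET index.php?/api/v2/get_plan/:plan_id; here we use made-up
# stand-in payloads with the same general shape.

from collections import Counter

# Per-run counter fields as returned by TestRail's API (assumption).
COUNT_FIELDS = (
    "passed_count", "failed_count", "retest_count",
    "untested_count", "blocked_count",
)

def summarize_plans(plans):
    """Sum the result counters across every run in every plan."""
    totals = Counter()
    for plan in plans:
        for entry in plan.get("entries", []):
            for run in entry.get("runs", []):
                for field in COUNT_FIELDS:
                    totals[field] += run.get(field, 0)
    return dict(totals)

# Made-up stand-ins for the get_plan responses for drops 1 and 2.
drop1 = {"entries": [{"runs": [
    {"passed_count": 40, "failed_count": 3, "retest_count": 2,
     "untested_count": 5, "blocked_count": 0},
]}]}
drop2 = {"entries": [{"runs": [
    {"passed_count": 25, "failed_count": 1, "retest_count": 0,
     "untested_count": 10, "blocked_count": 1},
]}]}

summary = summarize_plans([drop1, drop2])
print(summary)  # release-wide totals across both drop plans
```

This keeps testers entering results in one place (the milestone plans) while still giving a single release-wide progress number, though obviously a native summary-plan feature would be nicer.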