
Missing hierarchy? Test Run, Test Pass, Test Plan


#1

We have a situation that I suspect is not unique. We need to qualify “drops”: specific builds with specific changes and specific testing objectives. We also need to ensure that essentially all (currently written) tests get executed against the release at some point. Ideally, each drop would be tested thoroughly with every test; alas, this is not practical, so I need a compromise. I need to run and track the specific tests targeted for a drop AND I need to track overall testing progress for the release.

With the current implementation of TestRail, what is my best bet? To date I’ve created Test Plans that incorporate the specific tests for each drop (milestone) and relied upon my superior ( :wink: ) test planning skills to avoid unnecessary duplication and excessive test times for any given milestone-specific Test Plan, dropping suites the team has already tested and adding suites it has not yet tested to the next milestone’s Test Plan. Unfortunately, there is no one place I can go to track overall progress in this usage model.

Alternatively, I could just create a plan for the release with every test and track the overall release testing progress (tagging tests as passed or retest, with version information), but this makes it hard to see where I am in testing the current drop.

I could run both a release test plan and a milestone test plan concurrently, but that requires testers to enter results in both places. Obviously that is undesirable and error-prone.

Is there a way to summarize multiple Test Plans in the TestRail sense? In other words, if I have plans for drops 1, 2, and 3, could I create a “plan” that effectively concatenates those results into a summary plan? Alternatively, could a feature be added to redefine Test Plans as “Test Passes” and then define “Test Plans” as a collection of “Test Passes” (each of which is, in turn, a collection of multiple “Test Runs”)?


#2

There’s currently no built-in way to track this (and adding another entity such as a test pass is not planned at the moment, but I’ve added it to our feature request list). That said, I think this is a good use case for better reporting functionality. Thanks a lot for explaining your specific scenario; it will certainly help us design future reporting features. We have already received a similar but slightly different feature request from another user, and having a good report for this would be useful.

In the meantime, maybe you could use the new export feature (added in beta 1.0.4) and build a small custom tool to analyze the data?
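Purely as an illustration of what such a tool might look like: the sketch below assumes the export produces one file per test plan, in an XML layout with `<test>` elements carrying `case_id` and `status` attributes. The actual export format may well differ, so the element and attribute names here are placeholders to adjust.

```python
# Hypothetical sketch: summarize results across several exported test plans.
# Assumes each export is an XML file containing <test case_id="..." status="..."/>
# elements; adjust the element/attribute names to match the real export format.
import sys
import xml.etree.ElementTree as ET
from collections import defaultdict


def collect_results(paths):
    """Map each case id to the set of statuses recorded across all exports."""
    results = defaultdict(set)
    for path in paths:
        root = ET.parse(path).getroot()
        for test in root.iter("test"):        # placeholder element name
            case_id = test.get("case_id")     # placeholder attribute names
            status = test.get("status")
            if case_id and status:
                results[case_id].add(status)
    return results


def summarize(results):
    """Print how many cases were executed/passed at least once across all plans."""
    executed = {cid for cid, statuses in results.items() if statuses - {"Untested"}}
    passed = {cid for cid, statuses in results.items() if "Passed" in statuses}
    print(f"Cases seen across all plans: {len(results)}")
    print(f"Executed at least once:      {len(executed)}")
    print(f"Passed at least once:        {len(passed)}")
    never_run = sorted(set(results) - executed)
    if never_run:
        print("Never executed:", ", ".join(never_run))


if __name__ == "__main__":
    # Usage: python summarize_plans.py drop1.xml drop2.xml drop3.xml
    summarize(collect_results(sys.argv[1:]))
```

Something along these lines would at least give you the “executed at least once for the release” view across the per-drop plans until proper reporting is available.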

Regards,
Tobias


#3

Sort of related to this thread is my confusion about how to properly use milestones. It seems that I can associate a milestone with a test run (created as a standalone run) or with a test plan… but not with test runs defined within a plan?

So the only use for a milestone with a test plan is probably to mark the completion of the overall plan? I can associate a date with the milestone, but unless I use it in the run context its usefulness seems limited. After all, the whole purpose of a milestone is to track progress against the plan at intermediate points so you can “course correct”.

Is there anything in the current release of TestRail that I might be missing that would increase the utility of milestones in the test plan context?

Thanks,
Donald.


#4

Hello Donald,

Thanks for your message. Just to make sure that I understand the issue correctly, let me summarize the workflow you are currently using: you want to track builds/drops of your project within TestRail; multiple drops lead to a release. You don’t necessarily want to execute all tests for each drop, but you want to make sure that every test has been executed at least once for a release.

I also assume that you want to execute tests for a specific drop that are associated with changes of the particular drop (e.g. new/changed functionality, bug fixes etc). You are currently using one milestone for each drop and use a test plan to track the progress for the entire release. Now you want to associate specific runs within the test plan to your milestones. Is my summary of your workflow correct?

Yes, this is correct for the current implementation of milestones/test plans. With the current design/feature set, TestRail’s milestones are best suited to tracking actual full software releases/code branches, not necessarily the builds/drops that lead to a release (though this somewhat depends on the software project).

I understand that this might not be optimal for longer release cycles and large numbers of test cases, where you are unable to execute all tests against each new build/drop. We will make sure to

a) consider adding a separate milestone field to runs within test plans and
b) consider adding a report to track which test cases have been executed for a milestone across all test runs and plans associated with a milestone

I can see how this would make it easier for a lot of TestRail users to work with milestones in larger projects. Do you have any other suggestions that would help with your project? Would a feature to track specific builds/drops as a sub-entity of milestones help?

Thank you.

Regards,
Dennis


#5

[quote=dgurock]Hello Donald,

Just to make sure that I understand the issue correctly, let me summarize the workflow you are currently using: you want to track builds/drops of your project within TestRail; multiple drops lead to a release. You don’t necessarily want to execute all tests for each drop, but you want to make sure that every test has been executed at least once for a release.

I also assume that you want to execute tests for a specific drop that are associated with changes of the particular drop (e.g. new/changed functionality, bug fixes etc). You are currently using one milestone for each drop and use a test plan to track the progress for the entire release. Now you want to associate specific runs within the test plan to your milestones. Is my summary of your workflow correct?

We will make sure to

a) consider adding a separate milestone field to runs within test plans and
b) consider adding a report to track which test cases have been executed for a milestone across all test runs and plans associated with a milestone

Would a feature to track specific builds/drops as a sub-entity of milestones help?

Thank you.

Regards,
Dennis[/quote]

Your summary in your first paragraph is exactly correct. Unfortunately, our system is too large to fully qualify in one drop, so we want to design test passes that incorporate testing of new functionality for a given drop, core functionality that must work in every drop (acceptance), plus specific test targets at key phases (e.g. select regression testing at the end of our Alpha phase).

The test run feature in TestRail works well for another of our products that is much smaller and lends itself to a more “Agile” methodology, but it falls short for qualifying the larger system, where it is not possible to test everything that needs testing for a release within the confines of a single drop.

Your summary in your second paragraph is on target as well. Within the testing of a drop I’d like to know we’ve met our testing objectives for that drop… to indicate readiness for a new drop, to drive closure on the test pass, and to document what we failed to get to due to resource and schedule constraints. Potentially I could re-target test runs from one milestone to the next if that seems prudent… or just decide that we’ll accept the risk and simply document that explicit decision (by the project team).

Additionally, your proposed items “a” and “b” are exactly what I had in mind. “b” may be able to leverage other reporting you’ve already planned for TestRail.

Thanks!

Donald.


#6

Hello Donald,

Thanks for the additional details and for confirming that our summary was correct; knowing your workflow really helps us understand the issue. We will definitely look into ways to make it easier to work with multiple builds per ‘milestone’. We might implement the changes mentioned above, or we might come up with additional ideas on how to improve this. I will make sure to email you if we need more details or if we have new ideas in this regard. Thanks!

Regards,
Dennis