So here is the situation:
As we go through acceptance testing, we create a Milestone for the release and then a Test Plan for the specific SW build we are on. We begin testing, and as we identify issues, we adjudicate the ones that require fixing. Once those fixes are ready, we re-run the test plan under the new build and include all procedures that were untested, blocked, or failed. If we think any other procedures need to be re-run (based on risk), we pull them over by hand.
At this point, we typically close out the initial Test Run so we don’t get accidental updates. All of the untested/failed/blocked procedures are completed in the next Test Run. The problem is that if I generate a summary report for the Milestone, I get a large number of untested procedures that WERE actually tested in the final Test Run. Is there a way to generate a report over the entire Test Plan that simply shows what passed/failed and where? I don’t want the untested procedures counting against us in the summary report, but the only way I can see to manage that is to remove the untested procedures from the initial Test Run AFTER I re-run it.
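In case it helps illustrate what I'm after: this is a rough sketch of the merge I'd want the report to do, written against made-up data. It assumes each run's results can be exported (e.g. via the tool's API) as a case-ID → status map, oldest run first; `merge_runs` and the status strings here are hypothetical, not anything built into the tool.

```python
UNTESTED = "untested"

def merge_runs(runs):
    """Fold runs oldest-to-newest: a later result overrides an earlier one,
    except that 'untested' never overwrites a real recorded status."""
    merged = {}
    for run in runs:
        for case_id, status in run.items():
            if status != UNTESTED or case_id not in merged:
                merged[case_id] = status
    return merged

# Hypothetical example: initial run, then the re-run on the fixed build.
initial = {"C1": "passed", "C2": "failed", "C3": "untested", "C4": "blocked"}
rerun   = {"C2": "passed", "C3": "passed"}

print(merge_runs([initial, rerun]))
# C2 and C3 take their re-run status; C1 and C4 keep their initial status.
```

That merged view is what I'd like the Milestone summary to show, without me deleting the untested procedures from the closed initial run by hand.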