
How to get test case results in Milestones to overlap?


#1

Hi,
I'm having an issue getting accurate testing reports for milestones.
I have set it up so that I have one or more test runs per sprint, and these are associated with Milestones (currently through sub-milestones per sprint, but that can be changed). The test cases assigned to each test run vary, but the same test cases are often repeated in several test runs to verify fixes, etc.

Now, when I look at the status for a given milestone, I am only interested in the latest result for each test case, but what I get is every individual result.

As an example, I have two test runs with 12 and 13 test cases. 4 of those overlap, so there are 21 unique test cases. When I look at the results for the individual test runs, I want to see 12 and 13 results, respectively. And I do.
But when I look at the results for the milestone, I want to see the results for the unique test cases (21), but instead I see 25. In my specific case, 4 of those first failed but later passed. The status report will now always show 4 failed tests, giving the wrong impression to the stakeholders.
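To make the desired counting concrete, here is a minimal sketch (plain Python, with made-up case ids and statuses) of the difference between counting every individual result and keeping only the latest result per case:

```python
# Each tuple is (case id, status), listed in execution order across two runs.
# Case "C4" was executed in both runs: it failed first, then passed on retest.
results = [
    ("C1", "passed"), ("C2", "passed"), ("C3", "passed"), ("C4", "failed"),  # run 1
    ("C4", "passed"), ("C5", "passed"),                                      # run 2
]

# What the milestone report counts today: every individual result.
print(len(results))                                 # 6 results, including the stale failure

# What the report should show: only the most recent result per unique case.
latest = {}
for case_id, status in results:                     # later results overwrite earlier ones
    latest[case_id] = status

print(len(latest))                                  # 5 unique cases
print(sum(s == "failed" for s in latest.values()))  # 0 failures once the retest passed
```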

Am I doing something wrong, or should I associate my tests and test cases differently?
I have added an image describing my structure so that it is easier to understand:


#2

As an update to this, I can see that every test run creates its own version of the test case, using the id Txxxxx. I can certainly see the reason for this, especially combined with the Configuration options in Test Plans (which I will probably use at a later test level).

So this leads me to believe that it is my structure that needs to change. How do other users organize their sprints and test cases that overlap several sprints, so that the status reports don't include old, outdated results?

As a side note, I see that the Comparison for Cases report more or less gives me what I want, so I guess that will be a workaround for now, but what I am really looking for is a similar report for milestones and projects. Long term, it should also differentiate between test executions with different configurations.
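A scripted alternative would be to pull the latest result per case over the API. The following is only a rough sketch, assuming the standard TestRail API v2 endpoints get_runs and get_tests, with placeholder URL and credentials:

```python
import requests

# Hypothetical instance URL, user and API key -- replace with your own.
BASE = "https://example.testrail.io/index.php?/api/v2"
AUTH = ("user@example.com", "api-key")

PROJECT_ID = 1
MILESTONE_ID = 5


def api_get(uri):
    """GET a TestRail API v2 endpoint and return the decoded JSON."""
    resp = requests.get(f"{BASE}/{uri}", auth=AUTH,
                        headers={"Content-Type": "application/json"})
    resp.raise_for_status()
    return resp.json()


# Runs attached directly to the milestone (runs inside test plans would
# additionally need get_plans / get_plan). Newer TestRail versions wrap
# the list in a pagination object, hence the fallback below.
data = api_get(f"get_runs/{PROJECT_ID}&milestone_id={MILESTONE_ID}")
runs = data["runs"] if isinstance(data, dict) else data

# Walk the runs oldest-first so later executions overwrite earlier ones.
latest = {}
for run in sorted(runs, key=lambda r: r["created_on"]):
    tests = api_get(f"get_tests/{run['id']}")
    tests = tests["tests"] if isinstance(tests, dict) else tests
    for test in tests:
        latest[test["case_id"]] = test["status_id"]

# TestRail's built-in status ids: 1=Passed, 2=Blocked, 3=Untested, 4=Retest, 5=Failed.
failed = sum(1 for status in latest.values() if status == 5)
print(f"{len(latest)} unique cases, {failed} currently failed")
```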

So, tall order. Any thoughts or ideas?


#3

This is exactly the problem I face. I've been searching for ways to avoid this, even trying report generation, but with no luck.