
Recommendations for setting up test runs by environment


#1

I am curious what others are doing to test releases as they move up through the environments. Would it be good practice to set up Milestones as Release 1.0 and the Test Runs as DEV, QA, UAT, PREPROD, and PROD, or would it be better to use the Test Run Configurations feature and set things up by environment name? I am building our QA processes from scratch, so I am looking for best practices. One of the main issues our team has is that they do not know what failed on the way up through the environments, and we are seeing hot fixes for a recurring failure not getting applied to all of the environments.


#2

We build a test suite, then use the tests from that suite in test plans and create test runs for each new environment (Dev, QA, Prod, etc.). When you are working on a test case within a run, you can click into the test case link and see its entire history: every run the case has been used in, along with the results. You can also see all defects associated with the test case and when they were created (as long as you link your defect tracking system), and link into your defect tracker for the details of the bug and fix. (We use FogBugz, now called Manuscript, which I can highly recommend!)


#3

Really, I think the most flexible and useful approach for you is to always use a Test Plan, so that you can group all of your related testing together as it moves through the environments. This lets you use the Configurations option, but also allows you to add additional tests for a specific environment, for example extra data verification once you get to PREPROD and have better data sets.

Within Test Plans, I do like to use the Configurations option. This way, when I add or remove test cases from a run that is using Configs, I know that the test case selection gets updated for all of the environments. This ensures that we’re running the same tests in each environment. (We have thousands of test cases, and even with really good component tagging, etc., test runs sometimes still vary.)

I would recommend using Test Plans with Configurations for your situation, too. All completed runs within the Test Plan are visible, so testers in a downstream environment's configuration can see failures from previous environments that relate to the cases they are assigned to execute. They do not have to hunt through other runs or your bug tracker; they just go to the plan-level view of their assigned run. This also lets you track a set of tests through all environments. For example, you could create a test plan to track one specific new feature as it moves through your environments and out to production.
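If you ever want to script that setup instead of clicking through the UI, the same structure can be created through TestRail's API (the add_plan endpoint). Here is a minimal sketch in Python, assuming an "Environment" configuration group already exists under Test Plans > Configurations; the base URL, credentials, and all IDs below are placeholders you would replace with your own (you can look up configuration IDs via get_configs):

```python
import requests

# Placeholders: substitute your own TestRail URL, credentials, and IDs.
BASE_URL = "https://example.testrail.io/index.php?/api/v2"
AUTH = ("user@example.com", "api-key")  # TestRail email + API key

PROJECT_ID = 1
SUITE_ID = 1
MILESTONE_ID = 5

# Hypothetical configuration IDs for an "Environment" config group
# covering DEV, QA, UAT, PREPROD, and PROD.
ENV_CONFIG_IDS = [10, 11, 12, 13, 14]

payload = {
    "name": "Release 1.0 - Environment Verification",
    "milestone_id": MILESTONE_ID,
    "entries": [
        {
            "suite_id": SUITE_ID,
            "name": "Release 1.0 regression",
            "include_all": True,           # same case selection in every environment
            "config_ids": ENV_CONFIG_IDS,
            # One run per environment configuration:
            "runs": [
                {"include_all": True, "config_ids": [config_id]}
                for config_id in ENV_CONFIG_IDS
            ],
        }
    ],
}

response = requests.post(f"{BASE_URL}/add_plan/{PROJECT_ID}", json=payload, auth=AUTH)
response.raise_for_status()
print("Created plan:", response.json()["id"])
```

Because every run comes from the same plan entry, adding or removing cases later updates all environments at once, which is exactly the behavior described above.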

We also do our milestones by release, and give each team a sub-milestone under each release milestone just to keep things a little more organized. (You could also do sub-milestones by feature if you have a cross-team effort.) This lets us see how each team is doing as we approach our deadlines, and gives us some nice sorting options on the Test Runs view.
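That milestone structure can be scripted, too. A hedged sketch using the API's add_milestone endpoint, where the parent_id field creates the sub-milestones; the URL, credentials, project ID, and team names here are all placeholders:

```python
import requests

# Placeholders: substitute your own TestRail URL, credentials, and project ID.
BASE_URL = "https://example.testrail.io/index.php?/api/v2"
AUTH = ("user@example.com", "api-key")
PROJECT_ID = 1

# Create the release milestone first...
release = requests.post(
    f"{BASE_URL}/add_milestone/{PROJECT_ID}",
    json={"name": "Release 1.0"},
    auth=AUTH,
).json()

# ...then one sub-milestone per team, linked via parent_id.
for team in ["Team A", "Team B"]:  # hypothetical team names
    requests.post(
        f"{BASE_URL}/add_milestone/{PROJECT_ID}",
        json={"name": f"Release 1.0 - {team}", "parent_id": release["id"]},
        auth=AUTH,
    ).raise_for_status()
```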