
Using TestRail to test multiple simultaneous product releases


#1

I’ve been spending some time playing with TestRail and I really like what I
see – it’s looking really good for such a new product!

However, due to the way our product development is done, I am having a hard
time seeing how to smoothly integrate TestRail into our organization in a
way that is sustainable, particularly as the number of products (and the
number of versions of those products) that we must test increases over
time.

Scanning the forum, it seems that I’m not the only one with these
challenges. Some posts have brought up the need to be able to “tag” test
cases as a way to address their needs. I am not sure I completely follow
how tagging would be implemented, and how well it would map to my
situation, so I guess the best thing to do is to post a description of our
situation, and how I would like TestRail to work with us to meet our needs.

So here goes…

We have a number of different products, each of which normally has several
different versions in the field at any given time, and at least one
not-yet-released version in active development. For example, we might have
the following versions out “in the wild”:

o V1.1
o V1.2
o V2.0

And say that our developers are actively working on a “bugfix” release to
V2.0 (to be called V2.1) as well as development of some serious new
features in what will become V3.0.

Looking at this in terms of test cases, most of them are equally applicable
to all versions of the product; for example, a test case that tests the
ability to login to the product – the login workflow just doesn’t change
from version to version.

However, there are test cases that have more specific version
requirements. For example, they test a feature that is only available in
certain versions (say a new feature that wasn’t in the product previously),
or an old feature that has been retired. It seems that this boils down to
the following general cases:

o Test cases that are applicable to versions up to and including
  version <X>

o Test cases that are applicable to version <X> or later

o Test cases that are applicable only between versions <X> and <Y>
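To make this concrete, here’s a rough sketch in Python of how I imagine the selection working (all names and data are mine, nothing TestRail-specific): each case carries optional lower/upper bounds over an ordered version list, and a run includes only the cases whose range covers the target version.

```python
# Hypothetical sketch: applicability as an optional [min, max] range
# over an ordered list of versions. Names and data are illustrative.
VERSION_ORDER = ["1.1", "1.2", "2.0", "2.1", "3.0"]  # assumed release order

def applies_to(case, version, order=VERSION_ORDER):
    """Return True if `case` applies to `version`.

    `case` may have 'min' / 'max' keys; a missing bound means the
    range is open on that side.
    """
    idx = order.index(version)
    lo = order.index(case["min"]) if case.get("min") else 0
    hi = order.index(case["max"]) if case.get("max") else len(order) - 1
    return lo <= idx <= hi

cases = [
    {"name": "login works"},                              # any version
    {"name": "new export feature", "min": "2.0"},         # 2.0 or later
    {"name": "legacy report", "max": "1.2"},              # up to and incl. 1.2
    {"name": "beta widget", "min": "1.2", "max": "2.0"},  # only 1.2..2.0
]

# Building a run for the 2.1 bugfix release picks just the valid cases:
run = [c["name"] for c in cases if applies_to(c, "2.1")]
```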

I would like to be able to maintain a single set of test cases for each
product and, for each test case, specify one of the above “applicability
ranges” (applicable to any version, applicable to versions up to and
including <X>, applicable to version <X> or later, applicable to versions
between <X> and <Y>).

I would then like to be able to instruct TestRail to construct a test run
for, say, our upcoming version 2.x bugfix release (2.1), and have TestRail
include only those test cases that are applicable to this version.

Of course, this test run might not be 100% accurate – there might be some
test cases that are no longer applicable (for example, an old feature that
has been newly retired); in that case, I’d tweak the test case to note the
version at which the test case becomes no longer applicable. Newly-added
features would require new test cases, of course – they would have their
“applicability” set to “version <X> or greater”.

Ideally, I’d like to be able to have this kind of capability for test suite
sections, and entire test suites themselves.

Of course, I might not have the details right – maybe the assembly of the
appropriate test cases is not done as part of the creation of a test run;
maybe there’s a different mechanism that should be used. But I do know
that without this kind of capability, it would be difficult for us to use
TestRail, and would only get more difficult over time.

Is the goal of TestRail to support this kind of usage? If so, in what kind
of timeframe?

Thanks for putting up with a long post! :slight_smile:

Ed


#2

Hello Ed,

Thanks for your post; I really appreciate the detailed and clear way you explained your situation and requirements. I agree that we still need to make some improvements to TestRail in order to support such scenarios well.

Our plan is to support such scenarios (and other similar situations) soon. Let me explain what we are planning to do: on the Add Run/Edit Run page, it is already possible to select specific test cases to be included in a test run (instead of including all test cases, which is the default).

This is already useful in some situations, but if you are working with large test suites, it’s often not feasible to select test cases individually. Our plan is to add filters to this page so you can include test cases based on their attributes. For example, this would allow you to include only test cases with a specific priority, or only test cases of a certain type. The plan is to also allow you to filter test cases by milestone and by custom fields.
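As a rough sketch of the idea (field names like ‘priority’ and ‘type’ are illustrative here, not TestRail’s actual data model), the planned selection would behave like a simple attribute filter over the suite:

```python
# Hypothetical sketch of attribute-based case selection for a test run.
cases = [
    {"title": "Login works",    "priority": 1, "case_type": "Regression"},
    {"title": "Export to CSV",  "priority": 3, "case_type": "Functionality"},
    {"title": "Password reset", "priority": 1, "case_type": "Functionality"},
]

def select_cases(cases, priority=None, case_type=None):
    """Include a case only if it matches every filter that was given."""
    result = []
    for c in cases:
        if priority is not None and c["priority"] != priority:
            continue
        if case_type is not None and c["case_type"] != case_type:
            continue
        result.append(c)
    return result

# Example: a run containing only priority-1 cases.
run = [c["title"] for c in select_cases(cases, priority=1)]
```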

In order for TestRail to know which test case is valid for which milestone, you would need to specify the minimum and maximum milestone for each case, if applicable. We already have a Milestone field for test cases, and this would become the minimum milestone field. We will either add the maximum milestone field as a built-in field or we will support this with custom fields, we are not yet sure about this.

I cannot say yet when the filter mechanism will be available. Our current plan is to release a new update in a few weeks, and we may be able to include the described enhancement in the update after that. This definitely has a high priority for us.

Thanks,
Dennis


#3

Dennis,

Thanks for the quick response!

About the way I explained my situation, it’s an occupational hazard – my job title is “functional architect”… :wink:

The feature you describe sounds like it would be quite useful. Thinking about it a bit more, it seems like the concept of “minimum milestone” and “maximum milestone” implies some kind of value comparison between the milestone specified as part of the test run, and the milestone values present in the minimum and maximum milestone fields.

The comparison of milestones can be problematic – you can’t really use sort order of the milestone name string as the basis of the comparison, as versions (which is really all a milestone is) can have any number of formats, many of which will not sort correctly.
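A quick Python illustration of the pitfall (the version strings are made up, but the behavior is exactly what plain string sorting gives you):

```python
# Lexicographic sorting of version-like milestone names goes wrong fast.
versions = ["1.9", "1.10", "2.0-rc1", "2.0"]
lexicographic = sorted(versions)
# '1.10' sorts before '1.9' (character-by-character comparison), and
# '2.0-rc1' sorts after '2.0' -- both the opposite of the intended order.
```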

That means that, to support the concept of minimum and maximum milestones, you’d have to somehow capture the user’s desired ordering of the milestones in TestRail, and use the milestones’ relative positions in the ordered list of milestones as your “sort order”. This shouldn’t be too difficult – you could use the order of the milestones on a project’s milestones page as the “sort order”, and could include something like up- and down-arrows on the page to allow the user to change the milestone ordering.

This kind of implementation should even make it possible for the user to change a milestone’s name without impacting the “sort order” – a benefit for organizations that use internal codenames for releases in development and then change them to numeric versions when released. Like we do… :wink:

So although your interface would display it in terms like “milestones greater than <X>”, the underlying code would actually be doing something more like “milestones with sort orders greater than the sort order of <X>”.
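Sketching that idea in Python (the milestone names are invented, and the list stands in for whatever ordering the milestones page would persist): comparisons only ever look at list position, never at the name, so renames are free.

```python
# Order milestones by their position in a user-maintained list,
# not by comparing names. Names here are hypothetical.
milestone_order = ["Aardvark", "V1.1", "V1.2", "Bobcat"]  # codenames mixed in

def milestones_after(name):
    """All milestones whose sort order is greater than `name`'s."""
    return milestone_order[milestone_order.index(name) + 1:]

# Renaming in place (e.g. codename -> release number on release day)
# leaves every milestone's relative order untouched:
milestone_order[3] = "V2.0"
```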

Sorry for another long post – my excuse this time is that I’ve done Linux packaging with RPM, and am intimately familiar with RPM’s challenges figuring out whether one version number is greater than another. If you’re interested, google for “rpmvercmp” for the horrible details… :smiley:

Thanks again for being so responsive!

Ed


#4

[quote=dgurock]Hello Ed,
I cannot say when the filter mechanism will be available. Our current plan is to release a new update in a few weeks. We may be able to include the described enhancement in the update after that. This has definitely a high priority for us.
[/quote]

Hi Dennis, I was wondering if there’s any chance we can beta (or even alpha) test the upcoming release? We get to play with new stuff and you get feedback and a bit of QA done for you! win-win! :slight_smile:

Cheers,

Og


#5

Hello,

Makes sense. :slight_smile:

You are right, we would need to have a way to compare milestones in order to support such filtering mechanisms. We currently sort milestones based on the entered due dates. This makes sense for the current purpose (showing milestones in the order they will probably be released/tested), but it would not work well for the scenario we are discussing. E.g. a maintenance release for the 1.x branch might get released after 2.0, but this shouldn’t have any impact on the milestone ordering for the test case selection.

One thing I’ve been thinking about is providing a way to specify the milestone version in a more structured way. If TestRail supported entering actual version numbers in a format such as major.minor[.build[.revision]] (example taken from Wikipedia), TestRail would be able to use this information to compare milestones. Now, not all projects use versions like this and we would need to make sure that this is optional/flexible.
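To sketch what structured version numbers would buy us (a simplified illustration, not a committed design): once the parts are parsed into integers, comparison is just tuple comparison, which avoids the string-sorting pitfalls discussed above.

```python
def version_key(s):
    """Parse a 'major.minor[.build[.revision]]' string into a tuple of ints.

    Tuples compare element by element, so '1.10' correctly sorts
    after '1.9'. (A real implementation would also need to handle
    versions with differing numbers of parts, e.g. '2.1' vs '2.1.0'.)
    """
    return tuple(int(part) for part in s.split("."))

milestones = ["2.0", "1.9", "1.10", "1.2.3"]
ordered = sorted(milestones, key=version_key)
```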

There are both advantages and disadvantages compared to the solution you suggested (allowing users to manually specify the milestone order). I would hope that this would make it easier for customers to work with milestones without having to think about ordering, but this may be an abstraction that’s not so obvious. We definitely have to think more about this.

Thanks,
Dennis


#6

Hello Og,

We have not started working on the filtering mechanism yet; we are currently working on the next update and other enhancements (we may include this in the update after the next one, but this has not been decided). We generally like the idea of beta tests – we ran a comprehensive ~4-month beta before 1.0 was released. But because the current version has a lot of ‘low-level’ changes and we are still changing quite a few things between internal iterations, a beta version would not be very useful at this point. We will definitely think about this for other updates though. :slight_smile:

Thanks,
Dennis


#7

Be careful – it’s dangerous to assume you can solve this problem for a majority of cases (again, google rpmvercmp)… :slight_smile:

Oh, I will be the first to admit that my suggested solution isn’t without its own problems! :slight_smile:

One other approach I thought of would be to have milestones be multi-selectable attributes of each test case (like the current milestone field you have now, except multiple milestones can be selected). Then, when a new test run is created, the milestone specified is compared with all the selected milestones in each test case, and if a match is found, the test case is included in the test run. Of course, this makes the maintenance of the test cases a bit more cumbersome, as when a new milestone is created, every test case must be reviewed (and, if it is still valid for the new milestone, have the new milestone added to it).
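A rough sketch of that multi-select approach (again, names and data are mine): each case carries the full set of milestones it applies to, so building a run is a plain membership check, and the maintenance cost shows up when a new milestone arrives.

```python
# Hypothetical multi-select milestones: each case lists every milestone
# it is valid for; a run is a simple membership test.
cases = {
    "login works":        {"V1.1", "V1.2", "V2.0", "V2.1"},
    "new export feature": {"V2.0", "V2.1"},
    "legacy report":      {"V1.1", "V1.2"},
}

def build_run(milestone):
    """Return the titles of all cases valid for `milestone`."""
    return sorted(name for name, ms in cases.items() if milestone in ms)

run = build_run("V2.1")
# Trade-off: when milestone "V3.0" is created, every still-valid case
# must be reviewed and have "V3.0" added to its set.
```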

Of course, you could make a good case for saying that should happen anyway… :slight_smile:

Ed


#8

[quote]
Be careful – it’s dangerous to assume you can solve this problem for a majority of cases (again, google rpmvercmp)… :)
[/quote]

I agree that we need to think more about this. :slight_smile: A good compromise would probably be sorting the milestones by the due date by default, and allowing users to change the order of milestones manually, as you suggested. We will look into this and will also brainstorm a few additional ideas and possible solutions. Thanks again for the suggestions!

Regards,
Dennis


#9

Dennis,

Thanks for taking the suggestions in the spirit in which they’re intended! :slight_smile:

Ed


#10

Hi all,
What is the status of this filter feature? Any news appreciated.
Stefan


#11

Hello Stefan,

Thanks for your message. We don’t have an update regarding the availability of the test case selection filter yet, but it still has a high priority on our feature list. We have already designed most of the specification for this feature and hope to include it in one of the next larger feature updates.

Thanks,
Dennis


#12

Hi,
I just started using the TestRail free trial and I am deciding whether I want to purchase the product. I like what I see so far, but I am a little confused about how to make a copy/template of a milestone so it can be used over and over again. I created a milestone with all of my test suites/plans in it. Now I want to make a clean copy for future use.

Thank you,

Wendy Antopolsky
Predictive Service
QA Tester


#13

Hello Wendy,

Thanks for your message. You can start multiple test runs for your test suites over time, so you can simply start new test runs for your new milestone. If you would like to make it easier to start new test runs for new milestones, you can use test plans like this:

  • Start a new test plan (via the Add Test Plan button in the sidebar of the Test Runs & Results tab)
  • Add your test suites to the plan, select your milestone and save the test plan
  • For the next milestone, you can easily create a similar test plan by clicking the Add Test Plan button again and then loading your previous test plan as a template via the Load Test Plan button in the sidebar of the Add Test Plan form

I hope this helps. I’m happy to explain this in more detail if you like and you can also email us at contact (at) gurock.com.

Thanks,
Dennis