
Best approach to managing different configurations with test case variations


Hi all,

I’ve been trying to figure out the best approach to handling similar test cases with small variations per configuration, in a way that would let me merge changes affecting all configurations at once.

So more specifically: we have an app, supported on different platforms, with all the same features, but some features end up with slightly different expected results based on the particularities of each platform. We do try to keep the app consistent though, so most likely a feature change would affect all platforms at once (but not the other way around: a platform-specific variation wouldn’t affect the entire feature).

So far we’ve been keeping separate projects for each platform (managed by the platform’s separate qa teams), which leads to inconsistent testing, feature changes not being propagated effectively enough through the different test suites, duplication of work (with the same test cases being created separately everywhere), and in general, a bit of a painful maintenance process.

I looked into Baselines, but they create branches without merging changes made in Master back into the baselines, so that doesn’t solve the problem. I also checked out the Configurations feature, but it uses the same test cases and doesn’t allow for test case variations (it also implies all platforms are tested at the same time, which isn’t the case for us).

Is there any way of not having to duplicate all our test cases onto different projects, to support variations per platform?




Hi Lula,
I certainly hope you get some good feedback on this question as it’s very similar to my situation. I have an on-prem application that we’ve been developing and testing for years. We are now working to bring that application to the cloud. This would mostly involve running the same tests, plus some new tests targeted just at the cloud environment.

My hope is that I wouldn’t have to duplicate thousands of test cases, as that would make it much more difficult to keep the test cases that run in both environments in sync.

Crossing my fingers that someone can guide us to a better way…



Hi Lucila, Fran,

Thanks for the posts! TestRail does not currently have a feature where a test would have multiple concurrent ‘versions’ based on different configuration values such as platform, O/S, etc.

However, TestRail’s Configurations feature is designed to quickly create test runs that cover a variety of possible situations for your testing needs. When you create a set of test runs using Configurations and select options from more than one group, TestRail will create runs for each possible combination of entries across the groups. You can then deselect the configuration combinations which are not being tested at that time, so you would not need to test all configurations with each test run or plan. Our video course also provides an example of this feature, which may help you as well.

With this approach, each test case in TestRail would be the same across configurations. That being said, it would be possible to create multiple test cases and use a custom field to specify which configurations apply to each one. Then, when creating test runs and plans, you can include only the cases that apply to the configuration under test. So, even though a test plan may contain test runs for multiple configurations, each run can contain a different set of tests to match the environment under test.
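To make the custom-field idea concrete, here is a minimal sketch (my assumption, not an official TestRail workflow) of how a multi-select "platforms" custom field on each case could drive which cases land in each platform's run. The field name `custom_platforms`, the value IDs, and the case data are placeholders; TestRail's API exposes custom fields with a `custom_` prefix, and its `add_run` endpoint accepts a `case_ids` list when `include_all` is false.

```python
def cases_for_platform(cases, platform_value_id):
    """Keep only the IDs of cases whose platforms field includes the value."""
    return [c["id"] for c in cases
            if platform_value_id in (c.get("custom_platforms") or [])]

def build_run_payload(suite_id, name, case_ids):
    """Payload shape for a run that includes only the selected cases."""
    return {"suite_id": suite_id, "name": name,
            "include_all": False, "case_ids": case_ids}

# Placeholder value IDs from the custom field definition.
ANDROID, IOS = 1, 2

cases = [
    {"id": 101, "custom_platforms": [ANDROID, IOS]},  # shared behaviour
    {"id": 102, "custom_platforms": [IOS]},           # iOS-only variation
]

payload = build_run_payload(5, "iOS smoke", cases_for_platform(cases, IOS))
# payload["case_ids"] now contains [101, 102]
```

You would feed such a payload to TestRail's `add_run` API endpoint (or select the same cases manually in the UI), keeping a single source of truth per case while still getting per-platform runs.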

I hope this information is helpful!



Hi Jon,
Thanks for your reply. I’m not sure this solution would help us much, as it sounds like we would still end up with a massive amount of repeated test cases for the different configurations.
Do you know if there are any plans for TestRail to start allowing for different expected results on one test case, or something along those lines?




Hi Lucila,

Thanks for the feedback! I have added your vote to a feature request for test case versions based on configurations, but I do not have an ETA or timeframe for this feature yet.

As an additional suggestion, you could use separated steps fields to hold the steps and expected results for each configuration. With separated steps, you would not need to submit results for every step and could choose which steps you enter actual results against. You can also use markdown formatting to add tables to your test cases if that layout helps.
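As an illustration of the table suggestion, a single test case could carry one expected result per platform in a markdown table like the sketch below (the platforms and wording are just examples):

```markdown
| Platform | Expected result                          |
| -------- | ---------------------------------------- |
| Android  | Back gesture returns to the home screen  |
| iOS      | Swipe-back returns to the previous view  |
| Web      | Browser back button reloads the list     |
```

This keeps the shared steps in one place, so a feature-wide change is edited once, while the per-platform variation stays visible to each team.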