
Use test cases across multiple projects



I want to set up my environment like this:

Project 1: Mobile device product family 1
Project 2: Mobile device product family 2
Project 3: Mobile device product family 3

Each of these projects is managed and executed by a different QA group, and I need to keep them separated.

However, many test cases are shared across the projects. For example, the mobile devices tested in each product family are the same: iPhones, Samsung, Motorola, HTC, etc.

The devices have the same features/sensors that need to be tested within each product family, so we use the same test cases. In fact, it is imperative that we use the same tests to ensure the functionality is consistent. I do not want duplicate test cases, as that would triple the work of maintaining them.

How can I achieve this? I have not found a way to span projects, as test cases seem bound to a specific project. Using test plans within a single project is not the outcome I want, as that would intermix the test efforts of each project.

Any suggestions?



How about using a single project and creating a test plan with configurations?

Then you have one test case but three different test results, one per product family.

We might also need cross-project test cases, though. If we use the method above, we are effectively running several projects within one project, right?
Maybe a cross-project test case or sub-project feature is needed.





[quote]I am wanting to set up my environment in this manner:[/quote]

+1 for this functionality.

We are evaluating TestRail, and this is a real flaw in the system. Currently the only solution proposed by TestRail is to “keep all projects in one TestRail project” if we want to share the same test case database. That forces us to use milestones as “projects”, which limits functionality.

Also, when we “close” test runs, they appear in the “Completed” section without any milestone reference. So after a while we will have 1000+ test runs from 60+ projects with no way to sort them in “Completed” other than by date…

I could see a sub-project working the same way as the “Use a single repository with baseline support” project type, except that during test run creation I could also select test cases from the “master” test suite.

We would still technically be in the “same project”, so there should not be a problem in terms of database references…


You could have, say, four different test cases: one in each of the projects, plus an additional library case with all the instructions.

The problem is that you can’t pass/fail individual steps. What TestRail really needs is support for embedding test cases and more inheritance/polymorphism.


Hello all,

Thanks for your postings. We usually recommend using a single project if you want to share and reuse the same test case repository. You can easily test against different systems or environments using TestRail’s test plans and configurations, which were added specifically for this purpose.

To manage multiple, independent repositories in the same project, you can use test suites to separate your cases. This is also often used to organize test cases into different functional areas. Test suites are an optional feature in TestRail and you can think of each test suite as a separate test case repository in a project. You can enable test suite support when editing a project:
(see “Suite Modes and Baselines”)

You can still create a common test plan with test runs derived from different test suites, and also use this in combination with configurations. You can also see the test runs per test suite on the “Test Runs” page when viewing a test suite (menu entry in the sidebar on the right).
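For readers scripting this, the same layout can be created via TestRail’s API (`add_plan` in API v2). The sketch below builds a plan payload with entries from two test suites, where the first entry is expanded into one run per device configuration. All IDs, the instance URL, and the suite/configuration numbers are placeholders for illustration, not values from any real installation:

```python
import json
import urllib.request

BASE_URL = "https://example.testrail.io"  # hypothetical instance URL
PROJECT_ID = 1                            # placeholder project ID

# One plan, two entries: each entry derives its runs from one test suite.
# Listing config_ids on an entry expands it into one run per configuration.
plan_payload = {
    "name": "Release regression",
    "entries": [
        {   # runs derived from the first suite, one per device config
            "suite_id": 10,                # placeholder suite ID
            "include_all": True,
            "config_ids": [4, 5, 6],       # e.g. iPhone, Samsung, Motorola
            "runs": [
                {"config_ids": [4]},
                {"config_ids": [5]},
                {"config_ids": [6]},
            ],
        },
        {   # a single run derived from a second suite
            "suite_id": 11,                # placeholder suite ID
            "include_all": True,
        },
    ],
}

def add_plan() -> dict:
    """POST the payload to TestRail's add_plan endpoint (not called here;
    real use also needs basic auth with your email and API key)."""
    url = f"{BASE_URL}/index.php?/api/v2/add_plan/{PROJECT_ID}"
    req = urllib.request.Request(
        url,
        data=json.dumps(plan_payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The `runs` list lets you narrow each generated run (e.g. different `case_ids` per configuration) if `include_all` is too broad.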



[quote=tgurock]Hello all,

To manage multiple, independent repositories in the same project, you can use test suites to separate your cases.

Tobias [/quote]

Dear Tobias,

Thank you for your reply and for joining the discussion.

As I see it, we could divide our test cases into:

- Core tests (test cases that must always be tested and will never change)
- Customizations (test cases that describe client-specific changes)

Now, if we want to use test suites and keep history for test cases, we could create:

  • CORE (master) test suite
  • Client A customization (baseline)
  • Client B customization (baseline)

Then each of our test runs MUST be a test plan. In the test plan we could create a test run from the CORE test cases for a new client release (to check that our primary functionality works) and a test run from the client’s baseline (Client A), and treat this as one “whole” test run divided into two phases.


We cannot create an “empty” baseline, because the system forces us to copy test cases from other baselines each time.

Our goal is to create a TEST CASE that will appear in every test run we do, for every combination of configurations/clients/projects/milestones, in every possible release.

And this is not an easy task to do in TestRail by default.


Hello Piotr,

Thanks for the additional details. Baselines as offered by TestRail are meant to manage different versions/branches of the same test cases or repository and I think this wouldn’t apply to your situation (as you have different sets of test cases). I think the option of using one core test suite and additional test suites for your client customizations could work well.

Instead of creating single test runs, you would then create test plans with multiple test runs, using the following layout (I would suggest creating one test plan per client):

  • CORE
  • Client A customization

If you also need to test this against multiple configurations/environments, you can use TestRail’s configuration feature to automatically create a separate test run for each configuration. You can also link the test plans to milestones to associate those test plans with different product releases.
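To make the per-client layout above concrete, here is a small sketch that builds one plan payload per client, combining the shared CORE suite with that client’s customization suite and expanding each entry across device configurations, with a milestone linking the plan to a release. The suite IDs, configuration IDs, and milestone ID are hypothetical placeholders:

```python
# Hedged sketch of the recommended layout: one test plan per client,
# each containing a CORE entry plus that client's customization entry.
CORE_SUITE_ID = 10                                # placeholder suite ID
CLIENT_SUITES = {"Client A": 21, "Client B": 22}  # placeholder suite IDs
DEVICE_CONFIG_IDS = [4, 5, 6]                     # e.g. iPhone, Samsung, HTC

def plan_for(client: str, milestone_id: int) -> dict:
    """Payload in the shape of TestRail's add_plan: CORE suite first,
    then the client suite; config_ids expand each entry into one run
    per device configuration."""
    entries = []
    for suite_id in (CORE_SUITE_ID, CLIENT_SUITES[client]):
        entries.append({
            "suite_id": suite_id,
            "include_all": True,
            "config_ids": DEVICE_CONFIG_IDS,
            "runs": [{"config_ids": [c]} for c in DEVICE_CONFIG_IDS],
        })
    return {
        "name": f"{client} release regression",
        "milestone_id": milestone_id,  # ties the plan to a product release
        "entries": entries,
    }

plan = plan_for("Client A", milestone_id=7)
# Two entries (CORE + Client A), each expanded into three device runs.
```

Generating the payload in a loop keeps the CORE entry identical across every client’s plan, which is exactly the “same test case everywhere” goal discussed above.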

I hope this helps and I look forward to your reply.