
How do you integrate Automation with TestRail Test Runs


I am new to TestRail and I am looking at how difficult it would be to integrate my automation to publish results to TestRail. One issue I am facing is with IDs for the test run: each time I create a new test run, the test IDs change, which means I would have to change the ID mapping in my automation as well. Am I missing something here, or is there a different way of doing this? I would imagine the test IDs would remain the same while the test run ID changes, or something like that. Any help would be appreciated.

Best Approach for Test Plans

Have you considered using the add_result_for_case API method? This method uses the case ID in conjunction with the test run ID (as you imagined it should work).



Thanks for your posting. As Glenn already suggested (thanks!), we also recommend using the add_result_for_case API method for this. The difference from add_result is that add_result_for_case works with the case IDs (C#), which remain the same, instead of the dynamic test IDs (T#), which change for every test run:

There’s also a related method for adding multiple results in one step, which is more efficient than calling add_result_for_case separately for many results. The bulk method is called add_results_for_cases and is the recommended method for integrating automated tests with many tests/results:
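As a rough sketch of what a bulk submission looks like in practice: the documented API accepts a single POST to add_results_for_cases/:run_id with a list of per-case results. The base URL and auth header below are placeholders you would replace with your own; the status IDs 1 (Passed) and 5 (Failed) are TestRail's defaults.

```python
import json
from urllib import request

# Placeholder instance URL -- point this at your own TestRail installation.
BASE_URL = "https://example.testrail.io/index.php?/api/v2/"

def build_bulk_results(case_results):
    """Map {case_id: (status_id, comment)} to the add_results_for_cases
    payload. TestRail's default status IDs: 1 = Passed, 5 = Failed."""
    return {
        "results": [
            {"case_id": cid, "status_id": status, "comment": comment}
            for cid, (status, comment) in case_results.items()
        ]
    }

def post_results(run_id, payload, auth_header):
    """Submit all results in one HTTP request instead of one call per case."""
    req = request.Request(
        BASE_URL + "add_results_for_cases/%d" % run_id,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": auth_header},
    )
    return json.load(request.urlopen(req))
```

For a run with hundreds of cases this is one round trip instead of hundreds of separate add_result_for_case calls.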

I hope this helps!



I’ve had excellent results publishing results to TestRail using the methods described.

What I’d really like to see is the ability to arrange an automated test run and kick it off from within TestRail.

This might involve a special class of tests for automation and some ability to tie in with Maven or something. I’m a QA guy first and a developer second, so it really kind of makes my head hurt at this point to think of how best to do it. :wink:

Has anyone approached anything like this?


There is some official documentation explaining how to do this here:

We haven’t tried it yet, but it’s something I’d like to explore :slight_smile:


I guess the simplest solution would be to work together with Jenkins.

You can already start Jenkins jobs via remote URLs, passing parameters.
Add a button in TestRail with a UI script that triggers the specific Jenkins URL.
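To sketch the Jenkins half of that idea: Jenkins' buildWithParameters endpoint accepts the job token and any parameters as a query string. The host name, job name, and token below are hypothetical; only the URL shape comes from Jenkins' remote access API.

```python
from urllib import parse, request

JENKINS = "https://jenkins.example.com"  # hypothetical Jenkins host

def build_trigger_url(job, token, params):
    """Build a Jenkins remote-trigger URL (buildWithParameters endpoint)."""
    query = dict(params, token=token)
    return "%s/job/%s/buildWithParameters?%s" % (
        JENKINS, parse.quote(job), parse.urlencode(query))

# A UI-script button could then hit this URL, e.g. passing the TestRail
# run ID so the job knows where to report results back:
# request.urlopen(build_trigger_url("nightly-suite", "s3cret", {"run_id": 42}))
```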



Has someone tried Daniel’s solution?

We also need to have runs of both manual and automated tests in TestRail.
Triggering from TestRail would be really helpful.

Right now, we are thinking about adding a test plan for automated tests using the TestRail API but triggered by Jenkins.



Hi all,

We can recommend the following example for triggering/starting automated test runs from TestRail (as Glenn and Daniel already mentioned):

The trigger backend example script (trigger.php) doesn’t need to be on the TestRail server and can be any URL that can trigger your automated tests.

The other alternative is to trigger your tests outside of TestRail and then start test runs/plans via the regular API methods (add_plan/add_run):
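For the second alternative, starting a run from the outside is a single POST to add_run/:project_id (or add_plan for a grouped plan). A minimal payload-building sketch, with placeholder names and IDs:

```python
def build_run_payload(name, case_ids, suite_id=None):
    """Payload for POST add_run/:project_id. include_all=False restricts
    the run to the given case IDs (C#), which stay stable across runs."""
    payload = {"name": name, "include_all": False, "case_ids": case_ids}
    if suite_id is not None:  # required when the project uses multiple suites
        payload["suite_id"] = suite_id
    return payload

# The JSON response to add_run contains the new run's "id", which you then
# pass to add_result_for_case or add_results_for_cases.
```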



I am trying to do exactly what sbiznis is asking. I would like to integrate my currently running RSpec suite with my TestRail tests. Currently I plan on having daily Jenkins jobs run my automation, so I need to come up with a script that parses my results, creates a plan, adds tests by case ID (C#), and adds results to the plan. Does anyone have sample code that does this, since add_result_for_case requires :run_id, which will be dynamic in this case?
The other issue I have is that my test cases reside in four different projects. If I use the add_plan method, I need to know which project each test case resides in. So my two options are to consolidate all of my test cases into one project, or to determine which project each test case is in before I can create the plan. Any other ideas?

Sample code would be much appreciated to:
-> create a plan based on a set of test case IDs (C#s)
-> add results to this newly created plan based on test case IDs (C#s)

I hope this all makes sense. Thanks in advance.



Thanks for your posting. We have ready-to-use bindings for various programming languages to access the API (these handle the JSON parsing/formatting, authentication, etc.):

Regarding the case IDs: you would usually create test plans and runs during your automated test runs (e.g. via add_run/add_plan) and can then pass those IDs to add_result_for_case. TestRail needs to know the test run ID as there can be multiple active test runs for a single test case.

The easiest and most efficient way to handle the case/project mappings would be to store the project ID together with your test cases as part of your automated tests.

I hope this helps!
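Following that suggestion, the cross-project question could be handled by storing the project ID alongside each automated case and grouping before calling add_plan. A sketch, where the case-to-project mapping is whatever your automation already records about its cases:

```python
def group_cases_by_project(case_project_map, case_ids):
    """Group case IDs by the project they belong to, so that one plan or
    run can be created per project via add_plan/add_run."""
    grouped = {}
    for case_id in case_ids:
        grouped.setdefault(case_project_map[case_id], []).append(case_id)
    return grouped

# e.g. with a mapping maintained next to the automated tests:
# {10: [101, 102], 20: [201]} -> one add_plan call per project ID.
```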



[quote=tgurock]Hi all,
The trigger backend example script (trigger.php) doesn’t need to be on the TestRail server and can be any URL that can trigger your automated tests.[/quote]

This relates to the question I asked in the other forum (sorry for the cross-talk), in that it seems the backend example script (trigger.php) does have to be on the TestRail server. I say this hoping very much to be proven wrong! I am finding that if I put a full, outside URL in the trigger.ui script (“http://myserver:8081/automation/trigger.php…”), the PHP script doesn’t get triggered because it’s on an outside server (my TestRail is currently a hosted account).

If there’s a way around this I’d love to know it!




Hello Kent,

You can look into using a regular non-ajax POST in this case:



We have implemented full Jenkins integration using UI scripts. We are able to select individual tests, or run tests on full suites based on selection or status. It took some work to get there, but it works very well.

We also kick off tests from the Jenkins side, which queries TestRail for available tests to run through custom APIs.


We have a similar situation where we have an existing plan consisting of manual and automated test cases, and we want the automated test cases to log their results. The problem is finding out the run ID for the test case so that you can easily log the result with add_result_for_case. I can accomplish this by parsing the response from get_plan to get a list of the run IDs, then looping through these and matching up the test case with the run ID, but that seems like a lot of overhead to log a simple result.
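That lookup can be sketched in Python: get_plan returns nested entries/runs, and get_tests on each run lists the cases it contains. The response shapes below follow the documented API, trimmed to just the fields used here.

```python
def run_ids_from_plan(plan):
    """Flatten a get_plan response into the IDs of all runs it contains."""
    return [run["id"]
            for entry in plan["entries"]
            for run in entry["runs"]]

def find_run_for_case(tests_by_run, case_id):
    """tests_by_run: {run_id: get_tests response list}. Return the first
    run that contains the case, or None if no run does."""
    for run_id, tests in tests_by_run.items():
        if any(t["case_id"] == case_id for t in tests):
            return run_id
    return None
```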

It would be nice if we had an api call for the /index.php?/cases/results/ page in the gui and then we could get the run id of most recent “Untested” result. Or maybe even make the run id optional in add_result_for_case and have it automatically look up the run id for the most recent “Untested” result.


That sounds great, Chris! Combined with the API, you can implement quite a few things with UI scripts but I agree that this could be made easier (and it’s planned to look into this!).

Darrel: thanks for your feedback on this! We currently require the run ID because there can be multiple untested tests for the same test case in a test plan (e.g., with configurations). But I agree that your use case should be easier to implement and I’ve added this to our list of things to look into.




I just finished doing this. Basically I accomplished this in a few steps, the last remaining step is integrating it with our CI system, but it should be fairly simple.

I created a Rake task that basically sets all the env variables I want for an automation run. This rake task executes specs via parallel_rspec. The rake task is responsible for creating the new test run, and storing the test run id as an env variable, which is passed into the automation.

I also wrote a custom RSpec formatter that collects the results of the test scripts and posts them to the test run via TestRail’s API. The formatter extends dump_failures to uniquely report failures against the test cases, and also includes the exception that caused the failure as a comment on the result. It extends dump_summary to report successful test runs, because unfortunately RSpec doesn’t include a helper for reporting successes. You can report failures without using a custom formatter, but it’s a bit more reliable to do it with one (errors in before/after blocks can cause tests to report as passed when they should have failed, and a formatter has access to the failure reasons).

To link the scripts to the TestRail test cases, I prepended each script’s name with its TestRail test case ID and just have my automation strip it off.
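That prefix convention is easy to strip with a regular expression. The `C<id>_` format below is just an illustration (the poster doesn't specify the exact format, and it's not something TestRail prescribes):

```python
import re

def split_case_id(script_name):
    """Split a name like 'C2729_login_spec' into (2729, 'login_spec').
    Names without the prefix come back unchanged, with no case ID."""
    match = re.match(r"C(\d+)_(.+)", script_name)
    if match:
        return int(match.group(1)), match.group(2)
    return None, script_name
```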


Hi @chrisk , can you share the sample code of how you integrated the Jenkins automation job with UI scripts?

I am a novice in PHP, but have pretty solid experience in Java; in fact, I am using a data-driven automation framework with Selenium, TestNG, and Maven, running these tests on a cloud platform using Sauce Labs, and using Jenkins as my CI tool for running auto builds with a deployment pipeline.

I am planning to move from Zephyr to TestRail and am exploring it to make sure we are able to integrate our automated tests with our IDE and run the test runs from TestRail for small releases, rather than using Jenkins to run the whole smoke/regression test suite.

Can you guide me on how you called the Jenkins job to run individual tests or a test suite using UI scripts?



Is this still the preferred way to kick off automated tests from TestRail?




Hi Lisa,

Thanks for your posting. This trigger script is just an example, and we usually recommend triggering tests outside/independently of TestRail and then submitting test results via TestRail’s API:

Starting the tests outside/independently of TestRail is much more flexible than starting them from TestRail’s UI, and most teams simply submit the test results to TestRail as part of their test automation/CI systems (or as a post-build step). You can also learn more about how other teams are integrating their test automation on our blog:



My solution for mapping our Jenkins tests (nosetests) to TestRail test case IDs was to write a nose plugin. The plugin allows us to annotate our Python tests with the corresponding TestRail test case ID. See the README for more info, but here are the basics.

Annotate a test with some annotator key/value. In this case I am using a TestRail test case ID:
@nose_annotator('testrail', '2729')
def test_numbers(self):
    assert 1 == 1

On running the test(s), the nose-annotator plugin will output a file called “testrail_mapping.csv”, which lists the test class and function along with the test case ID.

Ex: testrail_mapping.csv:
TestSampleClass:test_numbers, 2729

We take that output file, and merge it with the results output and use the TestRail API to create a test run and POST each of the results.
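The merge-and-post step described above can be sketched like this. The CSV format follows the example output shown earlier; the status IDs and the final add_results_for_cases POST (not shown) are assumptions about how you wire it up:

```python
import csv
import io

def load_mapping(csv_text):
    """Parse testrail_mapping.csv lines like
    'TestSampleClass:test_numbers, 2729' into {test name: case ID}."""
    mapping = {}
    for row in csv.reader(io.StringIO(csv_text)):
        if len(row) >= 2:
            mapping[row[0].strip()] = int(row[1])
    return mapping

def merge_results(mapping, outcomes):
    """outcomes: {test name: status_id}. Build the payload for a single
    add_results_for_cases POST; tests without a mapping are skipped."""
    return {"results": [{"case_id": mapping[name], "status_id": status}
                        for name, status in outcomes.items()
                        if name in mapping]}
```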

Note that this is not a TestRail-specific annotation plugin. You can use it to annotate any test with any values, and even have multiple decorators. See the examples in the GitHub repo.