
Using TestRail add_run/update_run API calls for automated test runs


#1

Hi there,

I’ve recently worked on integrating our SpecFlow-based functional test framework with TestRail so that it can create a test run and post results for the Gherkin scenarios to that run for reporting. It is working okay, but the suite I create the test run from is not fully automated by SpecFlow. Currently, when I create the test run using add_run, I set “include_all” to true. This means that as I run the tests, the tests in that suite that are not automated remain “Untested” (sensible in a way, I guess) while the automated tests are set to passed or failed as appropriate. The problem with this approach is that, unless I move all the non-automated tests out of this suite, the results and reports always look a bit strange.

So I’m thinking of the approach where you call add_run but set “include_all” to false instead. What I’d like to confirm is the behaviour regarding “case_ids”: if you start with an empty set (“[]”) in add_run, is every subsequent update_run with “case_ids” going to be the union of all case IDs from previous invocations of update_run? Or will I have to resend all previous case IDs plus the new case ID that I want to add when I invoke update_run? The online documentation for update_run wasn’t absolutely clear on this.

Hope I have explained myself clearly enough.

Thanks!

Regards,
Derek.


#2

Hi Derek,

Thanks for your posting! The case_ids field is overridden in this case (otherwise you couldn’t reset/empty this field via update_run if it were the union of all previous cases). We would recommend setting up the case selection once when you create the test run, and the typical approach for a custom selection would be as follows:
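Something along these lines, using the official Python binding (the address, credentials and all IDs below are placeholders):

from testrail import APIClient

client = APIClient('https://example.testrail.com/')  # placeholder address
client.user = 'user@example.com'                     # placeholder credentials
client.password = 'api_key'

# Create the run with the custom case selection up front
run = client.send_post('add_run/1', {   # 1 = project ID (placeholder)
    'suite_id': 2,                      # placeholder suite ID
    'name': 'Automated test run',
    'include_all': False,
    'case_ids': [101, 102, 103]         # only the cases the automation will execute
})

# Then submit all results in one request instead of updating the run per test
client.send_post('add_results_for_cases/' + str(run['id']), {
    'results': [
        {'case_id': 101, 'status_id': 1},   # 1 = Passed
        {'case_id': 102, 'status_id': 5}    # 5 = Failed
    ]
})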

This is also much more efficient than calling update_run for each test or result, and it is the recommended way to handle a custom selection via the API.

I hope this helps!

Cheers,
Tobias


#3

I am also trying to use add_run/update_run to create a run from one suite but also add some cases from another suite. Is this possible?
Using this:

apstr = 'add_run/' + str(projectID)  # POST add_run/:project_id
result = client.send_post(apstr, {'name': testRunName, 'description': 'Test suites being run on QA',
                                  'suite_id': suiteID, 'include_all': True, 'case_ids': [5567845]})

The one test case is from a different suite than the one referenced by suiteID; it belongs to a legitimate case suite. If I set include_all = True, just the test cases from the suite are included; if I set include_all = False, no test cases are added.

Likewise, if I don’t add any case numbers beyond the test suite and then try update_run, no other test cases are added. Is this expected behavior?

Thanks,
Christi


#4

Hello Christi,

Thanks for your posting! A test run is always linked to a single test suite, but you can add a test plan to start multiple related test runs (also for different suites):

http://docs.gurock.com/testrail-api2/reference-plans

If you set include_all to false, you would need to specify the entire case selection via the case_ids attribute (expects an array of case IDs).
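As a rough sketch with the Python binding (client is a testrail.APIClient as in the docs; the project and suite IDs are placeholders), a plan covering both suites could look like this:

plan = client.send_post('add_plan/1', {   # 1 = project ID (placeholder)
    'name': 'QA regression',
    'entries': [
        {'suite_id': 2, 'include_all': True},                         # full run from one suite
        {'suite_id': 3, 'include_all': False, 'case_ids': [5567845]}  # selected case from another suite
    ]
})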

I hope this helps!

Cheers,
Tobias


#5

Hello-

I am trying to integrate our test automation framework with TestRail and am exploring the TestRail API v2 for the integration. I am particularly looking at the “add_run” call.
Using this API call, I noticed that it creates a test run with the default name (“Master”) and includes all the test cases in the mentioned project ID, even though “include_all” = false.
Here is the sample payload I am using:
{
    "results": [
        {
            "name": "Automation Integration Test 1",
            "assignedto_id": 20,
            "include_all": false,
            "case_ids": [58086, 45702]
        }
    ]
}

Is this a known defect? Or could you please guide me on how to consume this API call?

Thanks


#6

Hi!

You are using a slightly incorrect API request: you need to specify the name, assignedto_id, etc. attributes directly at the root level of your JSON payload. You grouped these attributes under an additional results attribute instead, and this convention is only used for the add_results and add_results_for_cases API calls. The add_run call is much simpler and looks as follows:

{
	"name": "This is a new test run",
	"assignedto_id": 5,
	"include_all": false,
	"case_ids": [1, 2, 3, 4, 7, 8]
}

I can also recommend using one of the API bindings instead of raw JSON/HTTP; the bindings are currently available for Java, .NET, Python, PHP and Ruby:

http://docs.gurock.com/testrail-api2/start
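For example, with the Python binding the add_run request above would look roughly like this (address, credentials and the project ID are placeholders):

from testrail import APIClient

client = APIClient('https://example.testrail.com/')  # placeholder address
client.user = 'user@example.com'                     # placeholder credentials
client.password = 'api_key'

# POST add_run/:project_id with the attributes at the root level
result = client.send_post('add_run/1', {   # 1 = project ID (placeholder)
    'name': 'This is a new test run',
    'assignedto_id': 5,
    'include_all': False,
    'case_ids': [1, 2, 3, 4, 7, 8]
})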

I hope this helps!

Cheers,
Tobias


#7

Hi!

I am working on a similar mechanism for my automation to eliminate “Untested” results within a test run.
I’ve come up with a solution that creates a test plan entry with include_all=true and then incrementally updates the case_ids of the test plan entry (with a POST to /update_plan_entry) each time it stores new test results. It works fine provided I don’t use “Configurations”. Once I add config_ids when creating a test plan entry, I can no longer eliminate “Untested” results: I’m updating the test plan entry, but the configuration itself has its own set of test cases, which by default is set to all. I can change it from the UI, but I can’t see an API method to update this setting. Any thoughts on that?
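For reference, the update step in my solution looks roughly like this (plan/entry IDs are placeholders; client is a testrail.APIClient as in the earlier sketches):

# case_ids is overridden on each call, so I resend the full accumulated selection
executed_case_ids.append(new_case_id)
client.send_post('update_plan_entry/%s/%s' % (plan_id, entry_id),
                 {'case_ids': executed_case_ids})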
Thanks!

Kind regards,
Lukasz


#8

Hello Lukasz,

Thanks for your feedback! We would usually recommend setting the case selection before starting your automated tests (based on the set of tests you plan to execute) and then going through the automated tests. Updating the case selection for every result/test is not very efficient and would involve extra API calls and changing the case selection all the time. The ideal and fastest approach would look as follows:

  • Create a test run or plan with include_all or a custom case selection (with the set of tests you want to run)
  • Use add_results or add_results_for_cases to add all test results (or chunk-wise, as sketched below)
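A rough sketch of the chunked variant (the run ID and chunk size are placeholders; client is a testrail.APIClient as in the earlier sketches):

# results is a list of dicts such as {'case_id': 101, 'status_id': 1}
CHUNK = 250  # arbitrary batch size
for i in range(0, len(results), CHUNK):
    client.send_post('add_results_for_cases/42',        # 42 = run ID (placeholder)
                     {'results': results[i:i + CHUNK]})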

Cheers,
Tobias


#9

Hi Tobias,

Thank you for your fast answer.
Updating the case selection each time I store results is not an issue in my case, because I do it in bulk using add_results_for_cases (and not separately for each case). So this really adds only a few additional API calls per test run.
What I am missing in the API is the possibility to update the case selection for a configuration, so I don’t end up with a situation like the one in the screenshot below:

[screenshot]

Kind regards,
Lukasz


#10

Hello Lukasz,

You can update the case selection for a test plan entry (update_plan_entry). This will also set the case selection for the configurations, unless a configuration has overridden the case selection. Updating an overridden case selection is currently not directly possible via the API (only via the UI), but it’s already planned to look into this. As a workaround, you could look into adding plan entries with just a single configuration/run each and using the entry-level case selection. This way, you can update the case selection of the plan entry, and this updates the case selection of the configuration in the same step (update_plan_entry). Would this work for you?
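A sketch of that workaround, reusing the suite/configuration IDs from later in this thread (the plan ID is a placeholder; client is a testrail.APIClient as in the earlier sketches):

# One plan entry per configuration, so the entry-level case selection
# also controls that configuration's run and can be updated via the API
for config_id in [23, 26]:
    client.send_post('add_plan_entry/15', {   # 15 = plan ID (placeholder)
        'suite_id': 33,
        'config_ids': [config_id],
        'include_all': False,
        'case_ids': [1],
        'runs': [{'config_ids': [config_id]}]
    })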

Cheers,
Tobias


#11

Hi Tobias,

Thank you for the explanation. I’ve analysed the way I’m creating a test plan entry and found that the problem was that I was overriding the plan entry’s case selection by including include_all=false in the actual run, like this:

{
    "suite_id": 33,
    "name": "test_50",
    "config_ids": [ 23, 26 ],
    "assignedto_id": 84,
    "include_all": false,
    "case_ids": [1],
    "description": "test_50 description",
    "runs": [
        {
            "assignedto_id": 84,
            "config_ids": [ 23, 26 ],
            "include_all": false
        }
    ]
}

Now, I’m setting include_all=false for the test plan entry only:

{
    "suite_id": 33,
    "name": "test_50",
    "config_ids": [ 23, 26 ],
    "assignedto_id": 84,
    "include_all": false,
    "case_ids": [1],
    "description": "test_50 description",
    "runs": [
        {
            "assignedto_id": 84,
            "config_ids": [ 23, 26 ]
        }
    ]
}

… and that works perfectly for me, because the configuration/run inherits the case selection from the test plan entry.

Thank you for your help!

Kind regards,
Lukasz


#12

Hello Lukasz,

That’s great to hear 🙂 This is also what I meant, but I thought you might be using multiple configurations in one test plan entry and that the screenshot you posted was just an example. But it appears you are already using just a single configuration per test plan entry, so this approach works well.

Cheers,
Tobias