
Parallel testing and managing the scope of test cases in a Cucumber run


#1

Hi, when creating a run via the API we set include_all to true, because at runtime we don’t know which test cases Cucumber will execute. Each parallel thread then adds results to the test run as scenarios complete. No problems here.

Once the parallel threads have finished testing and publishing results, we need to close the run. At that point, though, include_all should be false so that the run only includes the tests that actually executed.

What’s the recommended way to efficiently fetch all the test case IDs in the run with a status of Passed or Failed, so I can pass that list to update_run as case_ids?

As a workaround, I’ve tried having each parallel thread append the test case IDs it ran to the test run description. But test cases sometimes get left out: there are race conditions between threads, retries after 500 deadlock errors caused by conflicting posts from other threads, and no reliable way to confirm that every case_id made it into the description before flipping include_all to false.

A second workaround would be to fetch all tests for the run and call get_test on every one to compile a list of the associated case_ids. But wouldn’t that add a lot of HTTP traffic and make the rate-limiting problem worse?

A third workaround would be to move @wip tests and the like into a separate “not ready” suite and leave include_all as true. This isn’t ideal, but it seems like the most straightforward alternative.

The quickest fix would be for the API to provide a way to narrow the scope of a run to only the tests that actually executed, so we could automatically scope and close the run appropriately. That seems practical now that parallel runners are everywhere. Could you take this as a feature request, or implement something for it in the API soon?

Do you have any clean suggestions?


#2

Hi Matthew,

Thanks for your post! You can use the get_tests method to get the tests in bulk; it’s a single request, so it won’t run into the rate limit, and you can also add a status filter to the request to include everything except the ‘Untested’ status, e.g. “index.php?/api/v2/get_tests/1&status_id=1,2,4,5”. You would then process the response in your script to build your case_ids array for the update_run request:

http://docs.gurock.com/testrail-api2/reference-tests#get_tests
http://docs.gurock.com/testrail-api2/reference-runs#update_run
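
For illustration, here’s a minimal sketch of that flow in Python using the requests library. The base URL, credentials, and run ID are placeholders, and it assumes TestRail’s built-in status IDs (1 = Passed, 2 = Blocked, 4 = Retest, 5 = Failed); adjust it for any custom statuses in your instance.

```python
import requests

BASE_URL = "https://example.testrail.io/index.php?/api/v2"  # placeholder instance
AUTH = ("user@example.com", "api_key")                       # user + API key
HEADERS = {"Content-Type": "application/json"}
RUN_ID = 1                                                   # hypothetical run ID

# One bulk call: every test in the run that is not Untested (status 3)
resp = requests.get(f"{BASE_URL}/get_tests/{RUN_ID}&status_id=1,2,4,5",
                    auth=AUTH, headers=HEADERS)
resp.raise_for_status()
data = resp.json()
tests = data["tests"] if isinstance(data, dict) else data    # newer TestRail versions wrap the list
case_ids = [t["case_id"] for t in tests]

# Narrow the run to the executed cases and switch off include_all
requests.post(f"{BASE_URL}/update_run/{RUN_ID}",
              json={"include_all": False, "case_ids": case_ids},
              auth=AUTH, headers=HEADERS).raise_for_status()
```

After that, close_run can be called on the same run as usual.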

This is currently the recommended approach; however, we’re also happy to consider adding a built-in method for stripping a test run of tests that weren’t executed/tested. Hope this helps!

Regards,
Marco


#3

I would have thought the recommended approach would be to use get_cases to obtain the list of applicable cases, filtering them on other fields within the TestRail cases that link back to the Cucumber scripts, and then to use the plan methods add_plan and add_plan_entry to build the plan accordingly.

When it comes to test execution, each parallel Cucumber runner thread could then query the tests in the plan and decide what it wants to run.
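
As a rough sketch of that approach (Python again, with a hypothetical custom field custom_automation_ref standing in for whatever field links your cases to Cucumber scenarios, and placeholder project/suite IDs):

```python
import requests

BASE_URL = "https://example.testrail.io/index.php?/api/v2"    # placeholder instance
AUTH = ("user@example.com", "api_key")
HEADERS = {"Content-Type": "application/json"}
PROJECT_ID, SUITE_ID = 1, 2                                   # hypothetical IDs

# 1. Fetch the suite's cases and filter locally on the linking field
resp = requests.get(f"{BASE_URL}/get_cases/{PROJECT_ID}&suite_id={SUITE_ID}",
                    auth=AUTH, headers=HEADERS)
resp.raise_for_status()
data = resp.json()
cases = data["cases"] if isinstance(data, dict) else data     # newer TestRail versions wrap the list
case_ids = [c["id"] for c in cases if c.get("custom_automation_ref")]

# 2. Build a plan with an entry scoped to just those cases
plan = requests.post(f"{BASE_URL}/add_plan/{PROJECT_ID}",
                     json={"name": "Cucumber regression"},
                     auth=AUTH, headers=HEADERS).json()
requests.post(f"{BASE_URL}/add_plan_entry/{plan['id']}",
              json={"suite_id": SUITE_ID, "include_all": False, "case_ids": case_ids},
              auth=AUTH, headers=HEADERS).raise_for_status()

# 3. Each parallel runner can then call get_tests on the entry's run(s)
#    to decide which scenarios it should execute.
```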

While I’m thinking about it, it would be great if there were a get_cases_for_project API that returned cases for all suites, so you didn’t have to make multiple calls when you have more than one suite in your project.


#4

Ahh, got it! I was too quick and didn’t read it all, and didn’t notice the ..
This helps! Thanks, MK
[
  {
    "id": 1,
    "title": "Test conditional formatting with basic value range",
    ...
  },
  ...
]


#5

Hi Glenn,

In this scenario, I believe Michael needed to know which tests had been executed in the run, and for that you’d need the get_tests method; the get_cases response doesn’t include any test execution details. With get_tests, he can use the status request filter to find the tests in the run, excluding those left as Untested. The get_tests response also includes the case ID for each test, which can then be used to build the case_ids array when updating the test run to exclude the Untested ones. Hope this helps to clarify things a little, although there are likely multiple ways to accomplish the same thing depending on the automation environment outside of TestRail.

Regards,
Marco


#6

Hi Matthew,

Glad to hear this helps! Just let us know if you have any further questions.

Regards,
Marco