
Test Automation Integration Options


#1

We’ve recently moved to TestRail, with the first phase of the migration being to migrate the test cases (manual and automated) and agree on the test process. We’re now looking at integrating the automation. Currently, each project follows these basic principles:

  • Milestones for each release
  • Test Runs for each sprint with each run incorporating some regression (automated and manual)
  • One milestone regression test run for a focused pre-release regression pass.
  • We then create a milestone test report to formally close the milestone and test runs.

What I’m now starting to look into is integrating the automated tests from our Selenium frameworks.

We have CI automation (those results are not yet being recorded in TestRail) and, in some cases, on-demand automated tests whose results we currently record manually.

Before I start implementing an integration, I want to understand a few things from a best-practice process and reporting point of view.

  1. When adding the integration to your test code, what’s the best way to identify the run ID? Do you change an internal configuration setting so that the tests read an updated run ID, do you have a standalone test run in TestRail that is always used to record the results, or do you create a run using the API?

  2. For CI, I’m particularly interested in reporting practices. The CI runs are against builds, not sprints, and of course they run every day. I can’t currently reconcile daily results coming into TestRail with reporting the results against a sprint or milestone. How are CI test runs normally reported in TestRail? Do they have a standalone test run where results are recorded and reported on separately, or is it as described above, where you update a config file with the new run ID that the framework reads, so that you can incorporate the runs into a sprint at a given point in time?

Any insights into the above would be greatly appreciated.


#2

It depends on your code. I use Python, and I just set globals to pass data about the test run around; for example, I set TestRun.testrail_toggle = True when I’m creating a test run in my code. I have all tests loaded through a single file that is called by our CI/CD tooling (DevOps): they pass a few arguments to that file, and that file ‘enables’ TestRail, sets a few other globals, then loads all tests based on the test_type argument they pass in. This way I have complete control over what tests run and where, and they just have to worry about passing a few arguments to a script.

All of my tests inherit from a base test file. In the base test file, I have a ‘cleanUp()’ method that is called after every test method, with a line that says: if TestRun.testrail_toggle is True, set the result for the test case. I created custom decorators in my Python code to assign a test_case_id attribute to my test methods, i.e. @test_case_id(‘C12345’) would sit right above the test method that automates whatever the C12345 test case in TestRail specifies. This attribute is what gets read when the base test clean-up method is called to set the result in, or add the case to, a test run.
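If it helps, here is a minimal sketch of that pattern, assuming unittest-style tests and the requests library; the TestRail URL, credentials, and the hard-coded “Passed” status are placeholders to adapt (working out pass/fail in your teardown is framework-specific):

```python
import unittest

import requests


class TestRun:
    # Globals used to pass test-run state around (set by the CI entry script).
    testrail_toggle = False
    run_id = None


def test_case_id(case_id):
    """Decorator that tags a test method with its TestRail case ID, e.g. 'C12345'."""
    def wrapper(func):
        func.test_case_id = case_id
        return func
    return wrapper


class BaseTest(unittest.TestCase):
    # Placeholder connection details -- adapt to your TestRail instance.
    TESTRAIL_URL = "https://example.testrail.io/index.php?/api/v2"
    AUTH = ("user@example.com", "api-key")

    def tearDown(self):
        self.cleanUp()

    def cleanUp(self):
        """Called after every test method; reports the result if reporting is enabled."""
        if not TestRun.testrail_toggle:
            return
        test_method = getattr(self, self._testMethodName)
        case_id = getattr(test_method, "test_case_id", None)
        if case_id is None:
            return
        # status_id 1 = Passed, 5 = Failed in a default TestRail installation.
        # Hard-coded to Passed here for brevity; real code would check the outcome.
        requests.post(
            f"{self.TESTRAIL_URL}/add_result_for_case/{TestRun.run_id}/{case_id.lstrip('C')}",
            json={"status_id": 1},
            auth=self.AUTH,
        )


class LoginTests(BaseTest):
    @test_case_id("C12345")
    def test_valid_login(self):
        self.assertTrue(True)  # the actual Selenium steps would live here
```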

If you’re interested in my framework, I can show you some examples. I’m planning on breaking it out and putting it on GitHub and PyPI soon.


#3

@ghawkes check this out!!


#4

Thanks @bschwen, I’ll have a look through your code. I’m working in Python too and have a very basic integration set up for now as a first step, so I’ll see if I can adopt some of your principles. Cheers


#5

Hi @ghawkes!
Great questions. I’m not sure there is a best practice that would answer them, so I’ll explain how our current setup works to give you a few ideas about a way to go.

  1. Most tests don’t report results back into TestRail. We only report results back when we want wider access to them, i.e. at the end of a sprint or before a release. We use configuration switches to turn reporting on/off (see the sketch after this list).
  2. For each run, we create a new run manually using the “Rerun” button and setting the run name and milestone. We identify the run by its generated ID, which gets stored locally in config before the run starts. We identify tests by their test case ID (which stays the same between runs).
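To make the configuration part concrete, here is a rough sketch of the switch and run-ID handling, assuming a small JSON config file and the requests library; the file name and keys are just examples, while add_result_for_case is the standard TestRail API endpoint:

```python
import json

import requests

# Example config, updated before a "reporting" run is kicked off:
# {
#   "report_to_testrail": true,
#   "run_id": 42,
#   "url": "https://example.testrail.io",
#   "user": "user@example.com",
#   "api_key": "secret"
# }
with open("testrail.json") as f:
    CONFIG = json.load(f)


def report_result(case_id, passed, comment=""):
    """Push a single result to TestRail, keyed by the (stable) test case ID."""
    if not CONFIG.get("report_to_testrail"):
        return  # reporting switched off: run the tests silently
    status_id = 1 if passed else 5  # 1 = Passed, 5 = Failed
    requests.post(
        f"{CONFIG['url']}/index.php?/api/v2/add_result_for_case/{CONFIG['run_id']}/{case_id}",
        json={"status_id": status_id, "comment": comment},
        auth=(CONFIG["user"], CONFIG["api_key"]),
    )


# Example call from a test hook or teardown:
# report_result(12345, passed=True, comment="Checked against build 1.2.3")
```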

I hope that helps :slight_smile:


#6

Thanks @ozaidenvorm_tal. Point 1 is exactly my dilemma: for CI, not everyone wants to know the results every day, just at key milestones, e.g. end of sprint, pre-release, etc. Thanks for the insight.


#7

@ghawkes I save every run from every CI and regression build from Jenkins into TestRail. That way, if a database engineer changes something that breaks the tests, we know when it happened and can trace it back to the appropriate people. We can also tell how long a test has not been passing, down to the hour, for various reasons. I do something similar to what @bschwen has, though our TESTRAIL_USER environment variable is the toggle (when the variable is set, results are sent to TestRail; when unset, they are not). It is just a reduction in the number of variables we have to manage. In our case, the Jenkins build executes a shell command that creates a run and saves the resulting run ID to a temp file, right before Selenium or another test process picks up the run ID and uses it to send results.
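Roughly, the run-creation step that Jenkins calls looks something like the sketch below; the project and suite IDs, the temp-file path, and the TESTRAIL_API_KEY variable name are placeholders, while add_run is the standard TestRail API endpoint:

```python
import os

import requests

TESTRAIL_URL = "https://example.testrail.io/index.php?/api/v2"
AUTH = (os.environ["TESTRAIL_USER"], os.environ["TESTRAIL_API_KEY"])

# Create a run in project 1 against suite 3, named after the Jenkins build number.
response = requests.post(
    f"{TESTRAIL_URL}/add_run/1",
    json={
        "suite_id": 3,
        "name": f"CI build {os.environ.get('BUILD_NUMBER', 'local')}",
        "include_all": True,
    },
    auth=AUTH,
)
response.raise_for_status()
run_id = response.json()["id"]

# Save the run ID where Selenium (or another test process) can pick it up.
with open("/tmp/testrail_run_id", "w") as f:
    f.write(str(run_id))

print(f"Created TestRail run {run_id}")
```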


#8

A solution might be to have a couple of redundant suites or projects: a CI project and a release project. Our test framework automatically generates the test cases and sections if they do not already exist, so this approach may or may not fit your situation.
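The “create it if it doesn’t exist” part is essentially a get-then-add against the TestRail API; here is a rough sketch (the project/suite IDs, URL, and credentials are placeholders):

```python
import requests

TESTRAIL_URL = "https://example.testrail.io/index.php?/api/v2"
AUTH = ("user@example.com", "api-key")
PROJECT_ID = 1
SUITE_ID = 3


def _as_list(payload, key):
    # Newer TestRail versions paginate and wrap the list; older ones return it directly.
    return payload[key] if isinstance(payload, dict) else payload


def get_or_create_section(name):
    """Return the ID of the section with this name, creating it if it does not exist."""
    payload = requests.get(
        f"{TESTRAIL_URL}/get_sections/{PROJECT_ID}&suite_id={SUITE_ID}", auth=AUTH
    ).json()
    for section in _as_list(payload, "sections"):
        if section["name"] == name:
            return section["id"]
    created = requests.post(
        f"{TESTRAIL_URL}/add_section/{PROJECT_ID}",
        json={"suite_id": SUITE_ID, "name": name},
        auth=AUTH,
    ).json()
    return created["id"]


def get_or_create_case(section_id, title):
    """Return the ID of the case with this title, creating it if it does not exist."""
    payload = requests.get(
        f"{TESTRAIL_URL}/get_cases/{PROJECT_ID}&suite_id={SUITE_ID}&section_id={section_id}",
        auth=AUTH,
    ).json()
    for case in _as_list(payload, "cases"):
        if case["title"] == title:
            return case["id"]
    created = requests.post(
        f"{TESTRAIL_URL}/add_case/{section_id}", json={"title": title}, auth=AUTH
    ).json()
    return created["id"]
```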


#9

Thanks @Edo, this is great stuff. Like I said, what I have right now I’ve implemented as a phase 1 that requires some manual intervention and is a little rigid. I want to build something more sophisticated and flexible in the next phase, so information like this is perfect for the stage I’m at. :+1:


#10

In our case, the Jenkins build executes a shell command that creates a run and saves the resulting run ID to a temp file, right before Selenium or another test process picks up the run ID and uses it to send results.

Thank you @Edo. This is exactly what I was looking for. I’m using Artillery.io, which saves test results in JSON and HTML, so we can’t use the Jenkins TestRail plugin the way I’ve used it before with TestNG results.

-E