Test Automation Integration Options

#1

We’ve recently moved to TestRail. The first phase of the migration was to migrate the test cases (manual and automated) and agree on the test process. We’re now looking at integrating the automation. Currently, each project follows these basic principles:

  • Milestones for each release
  • Test runs for each sprint, with each run incorporating some regression (automated and manual)
  • One milestone regression test run for a focused pre-release regression pass
  • A milestone test report to formally close the milestone and its test runs

I’m now starting to look into integrating the automated tests from our Selenium frameworks.

We have CI automation (those results are not yet being recorded in TestRail), and in some cases on-demand automated tests whose results we record manually.

Before I start implementing an integration, I want to understand a few things from a best-practice process and reporting point of view.

  1. When adding the integration to your test code, what’s the best way to identify the run ID? Do you change an internal configuration setting so that the tests read an updated run ID, keep a standalone test run in TestRail that is always used to record the results, or create a run using the API?

  2. For CI, I’m particularly interested in reporting practices. The CI runs are against builds, not sprints, and of course they run every day. I can’t currently reconcile daily results coming into TestRail with reporting results against a sprint or milestone. How are CI test runs normally reported in TestRail? Do they get a standalone test run whose results are recorded and reported separately, or, as described above, do you update a config file with a new run ID that the framework reads, so the runs are incorporated into a sprint at a given point in time?

Any insights into the above would be greatly appreciated.

#2

It depends on your code. I use Python, and I just set globals to pass data about the test run around, with things like TestRun.testrail_toggle = True when I am creating a test run in my code.

All tests are loaded through a single file that is called by our CI/CD tooling (DevOps). They pass a few arguments to that file; that file ‘enables’ TestRail, sets a few other globals, then loads all tests based on the test_type argument they pass in. This way I have complete control of what tests run, and where, and they just have to worry about passing a few arguments to a script.

All of my tests inherit from a base test file. In the base test file, I have a ‘cleanUp()’ method that is called after every test method, with a line that says: if TestRun.testrail_toggle == True, set the result for the test case. I created custom decorators in my Python code to assign a test_case_id attribute to my test methods, i.e. @test_case_id(‘C12345’) would sit right above the test method that automates whatever the C12345 test case in TestRail specifies. This attribute is read when the base test’s cleanup method is called, to set the result for the case or add it to a test run.

If you’re interested in my framework, I can show you some examples. I’m planning on breaking it out and publishing it on GitHub and PyPI soon.

#3

@ghawkes check this out!!

#4

Thanks @bschwen, I’ll have a look through your code. I’m working in Python too and have a very basic integration set up for now as a first step, so I’ll look at your code to see if I can adopt some of your principles. Cheers

#5

Hi @ghawkes!
Great questions. I’m not sure there is a single best practice that answers them, so I’ll explain how our current setup works to give you a few ideas about a way to go.

  1. Most tests don’t report results back into TestRail. We only report results when we want wider access to them, i.e. at the end of a sprint or before a release. We use configuration switches to turn reporting on and off.
  2. For each run, we manually create a new run using the “Rerun” button and set the run name and milestone. We identify the run by its generated ID, which gets stored locally in config before the run starts. We identify tests by their test case IDs (which remain the same between runs).
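One way to model those config switches and the locally stored run ID is a small INI file read at startup. This is a sketch only; the section and key names (`[testrail]`, `report_results`, `run_id`) are invented for illustration:

```python
import configparser

# Example of what the local config might contain after a run is created.
SAMPLE = """
[testrail]
report_results = true
run_id = 1234
"""


def load_settings(text):
    """Parse the TestRail switches out of an INI-style config string."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    section = cfg["testrail"]
    return {
        # Whether the framework should push results to TestRail at all.
        "report": section.getboolean("report_results", fallback=False),
        # The run ID generated by TestRail, stored before the run starts.
        "run_id": section.getint("run_id", fallback=None),
    }
```

The test framework would call `load_settings` once at startup and skip all TestRail calls when `report` is false, which matches the on/off switch described above.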

I hope that helps :slight_smile:

#6

Thanks @ozaidenvorm_tal. Point 1 is exactly my dilemma: for CI, not everyone wants to know the results every day, just at key milestones, e.g. end of sprint, pre-release, etc. Thanks for the insight.

#7

@ghawkes I save every run from every CI and regression build from Jenkins into TestRail. This way, if a database engineer changes something that breaks the tests, we know when it happened and can trace it back to the appropriate people. We can also tell, to the exact hour, how long a test has not been passing, for various reasons. I do something similar to what @bschwen has, though our TESTRAIL_USER environment variable is the toggle (when the variable is set, results are sent to TestRail; when unset, they are not). It’s just a reduction in the number of variables we have to manage. In our case, the Jenkins build executes a shell command that creates a run and saves the resulting run ID to a temp file, right before Selenium or another test process picks up the run ID and uses it to send results.
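Sketched in Python, such a pre-test step might call TestRail’s `add_run` endpoint and drop the run ID in a file for the test process to pick up. The endpoint path, JSON body, and basic-auth scheme follow the TestRail API v2 documentation, but the URL, credentials, and file path here are placeholders:

```python
import base64
import json
import urllib.request


def create_run(base_url, user, api_key, project_id, name):
    """POST to TestRail's add_run endpoint and return the new run's ID."""
    url = f"{base_url}/index.php?/api/v2/add_run/{project_id}"
    payload = json.dumps({"name": name, "include_all": True}).encode()
    token = base64.b64encode(f"{user}:{api_key}".encode()).decode()
    req = urllib.request.Request(url, data=payload, headers={
        "Content-Type": "application/json",
        "Authorization": f"Basic {token}",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]


def save_run_id(run_id, path):
    """Write the run ID where Selenium (or any test process) can find it."""
    with open(path, "w") as f:
        f.write(str(run_id))


def load_run_id(path):
    """Read the run ID back in the test process."""
    with open(path) as f:
        return int(f.read().strip())
```

A Jenkins shell step would call `create_run` and `save_run_id` before launching the tests; each test process then calls `load_run_id` and reports against that run.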

#8

A solution might be to have a couple of redundant suites or projects: a CI project and a release project. Our test framework automatically generates the test cases and sections if they do not already exist, so this approach may or may not fit your situation.

#9

Thanks @Edo, this is great stuff. Like I said, what I have right now was implemented as phase 1; it requires some manual intervention and is a little rigid. I want to build something more sophisticated and flexible in the next phase, so information like this is perfect for the stage I’m at. :+1:

#10

In our case, the Jenkins build executes a shell command that creates a run and saves the resulting run ID to a temp file, right before Selenium or another test process picks up the run ID and uses it to send results.

Thank you @Edo. This is exactly what I was looking for. I’m using Artillery.io, which saves test results as JSON and HTML, so we can’t use the Jenkins TestRail plugin the way I’ve used it before for TestNG results.

-E

#11

So do you generate those at test run time? I do (well, ‘did’) that as well. This can become complicated if you have manual testers analyzing the results. With my framework, you can add the @test_case_id(‘C1234’) decorator to your test method. That way, the test case can have any title you’d like, and your test method name can be anything as well. When the setUp() and tearDown() methods run, the framework grabs the test_case_ids it stamped onto your test method and uses them to add the cases to the run, as well as to report the results.

Also, about your test run ID: if it’s saved to a temp file, is there some easy way to report test runs from a local machine instead of Jenkins?

I love hearing what others have done - especially in relation to python/selenium/testrail/jenkins!

If I could give you gold to describe your entire framework and system - I would!

#12

Let me know if you run into anything you either cannot figure out, or that you have figured out! My company is 100% growing into managing all our tests from Python and TestRail, so I’ve got most of what we’ve needed done already. But if someone is doing it differently with Python/TestRail, I’d LOVE to hear about it, as well as share my knowledge on the subject.

#13

Thanks for sharing @bschwen! I set up a custom test case field in TestRail called external_id. My TestRail initializer calls get_cases at the very beginning, before any tests are run. Right after each scenario executes, the library checks whether an external ID exists matching the test’s ClassName#method_name. If there is no match, the section is automatically created (if necessary) using add_section, followed by add_case with the external_id populated. After that, the case is added to the run, and finally a new entry is added to a results array containing the result code, case ID, and stack trace (on failure). We had been burned trying to populate individual results in TestRail in real time due to rate limits, so we wait until the very end of the run/teardown to push everything in bulk with add_results_for_cases.
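A stripped-down version of that bulk-reporting idea: collect results during the run and build a single `add_results_for_cases` payload to push at teardown. The status IDs (1 = Passed, 5 = Failed) are TestRail’s documented defaults; the `ResultCollector` class and its method names are illustrative:

```python
# TestRail's default result status IDs.
PASSED, FAILED = 1, 5


class ResultCollector:
    """Accumulates results in memory so they can be pushed in one bulk call,
    avoiding per-result API requests (and rate limits)."""

    def __init__(self):
        self.results = []

    def record(self, case_id, passed, stack_trace=None):
        """Call once per finished test; stack_trace is attached on failure."""
        entry = {"case_id": case_id,
                 "status_id": PASSED if passed else FAILED}
        if stack_trace:
            entry["comment"] = stack_trace
        self.results.append(entry)

    def payload(self):
        """Request body for POST add_results_for_cases/:run_id."""
        return {"results": self.results}
```

At the end of the run, the teardown hook would serialize `collector.payload()` as JSON and POST it to `add_results_for_cases/{run_id}` in a single request.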

#14

Gotcha! I like what you did with the external_id, and setting up the sections to mimic the class name is pretty awesome, as it’s hard to follow a convention otherwise! I update results in real time, because I have a parent class, inherited by all test classes, whose tearDown() looks for the toggle_test_rail boolean; if it’s True, it attempts to report the results to TestRail. Of course, I only set that to True when I’ve created the test run (which is only when it’s running from Jenkins).