We’ve recently moved to TestRail. The first phase of the migration was to migrate the test cases (manual and automated) and agree the test process; we’re now looking at integrating the automation. Currently, each project follows these basic principles:
- Milestones for each release
- Test Runs for each sprint with each run incorporating some regression (automated and manual)
- One milestone-level regression test run for a focused pre-release regression pass.
- We then create a milestone test report to formally close the milestone and test runs.
I’m now starting to look into integrating the automated tests from our Selenium frameworks.
We have CI automation (whose results are not yet being recorded in TestRail) and, in some cases, on-demand automated tests whose results we currently record manually.
Before I start implementing an integration, I want to understand a few things from a best-practice process and reporting point of view.
When adding the integration to your test code, what’s the best way to identify the run ID? Do you change an internal configuration setting so that the tests read an updated run ID, do you have a standalone test run in TestRail that is always used to record the results, or do you create a run via the API?
For CI, I’m particularly interested in reporting practices. The CI runs are against builds, not sprints, and of course they run every day. I can’t currently see how daily results coming into TestRail map onto reporting against a sprint or milestone. How are CI test runs normally reported in TestRail? Do they get a standalone test run where results are recorded and reported on separately, or is it as described above, where you update a config file with a new run ID that the framework reads, so the runs are incorporated into a sprint at a given point in time?
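One pattern I’ve been considering, and would welcome feedback on, is a run per CI build tagged with the sprint’s milestone via `milestone_id`, with results bulk-posted through `add_results_for_cases`. The helpers below only build the request payloads (the build number, case IDs, and status mapping are illustrative; 1 = Passed and 5 = Failed are TestRail’s default status IDs):

```python
def ci_run_fields(suite_id, build_number, milestone_id):
    """Fields for add_run for a daily CI build.

    Attaching milestone_id means every daily run rolls up under the
    sprint's milestone, so milestone reports still aggregate CI results.
    """
    return {
        "suite_id": suite_id,
        "name": f"CI build {build_number}",  # illustrative naming scheme
        "milestone_id": milestone_id,
        "include_all": True,
    }


def results_payload(outcomes):
    """Map {case_id: passed_bool} to the add_results_for_cases body.

    TestRail default status IDs: 1 = Passed, 5 = Failed.
    """
    return {
        "results": [
            {"case_id": case_id, "status_id": 1 if passed else 5}
            for case_id, passed in outcomes.items()
        ]
    }
```

Closing each run after posting would keep the daily runs immutable, while the milestone report at sprint end aggregates them; but I’m unsure whether that many small runs is considered good practice.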
Any insights into the above would be greatly appreciated.