We have a specific set of test runs, each pointing to a set of suites. The tests are executed via automation, and the metadata (TestCase definitions) and results are uploaded via the API. If it helps, one of the runs is our “DailyRun”, which points to Suite1 and Suite2.
We wanted a good historical view of how these tests perform over time, so the current approach is to write results to the same run every time the tests are executed.
The challenge: as tests change over time (added/removed), how best to handle the ones that have been removed?
[Our sync mechanism deliberately does NOT remove test cases, because removing a test case also removes all historical results for that test. The byproduct is that when a test is no longer available in the automation (the source), it is effectively orphaned in the test run and starts polluting the run's results with a bunch of “Not run” entries.]
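For what it's worth, the orphan detection itself is easy to script, even if the cleanup policy is the hard part. A minimal sketch, assuming we can fetch the list of tests in a run and the set of case IDs currently defined in the automation source (the data shapes and names here are hypothetical stand-ins, not any particular tool's API):

```python
# Sketch: identify "orphaned" tests in a run, i.e. tests that still
# exist in the run but are no longer present in the automation source.
# 'run_tests' and 'source_case_ids' are hypothetical stand-ins for
# whatever your test-management API actually returns.

def find_orphans(run_tests, source_case_ids):
    """Return the run's tests whose case_id no longer exists in the source.

    run_tests       -- list of dicts from the run, each with a 'case_id'
    source_case_ids -- set of case IDs currently defined in automation
    """
    return [t for t in run_tests if t["case_id"] not in source_case_ids]


# Example with made-up data:
run_tests = [
    {"case_id": 101, "title": "login works"},
    {"case_id": 102, "title": "old checkout flow"},  # removed from automation
    {"case_id": 103, "title": "search returns results"},
]
source_case_ids = {101, 103}

orphans = find_orphans(run_tests, source_case_ids)
print([t["case_id"] for t in orphans])  # → [102]
```

Running something like this on a schedule would at least flag the orphans automatically, rather than relying on someone to spot them by eye.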
Until now I have been periodically archiving the run and removing the “orphans”, but this is not without pain: 1) it is manual and therefore prone to human error; 2) if I want to see the full history, I have to look at the “active” run as well as all the archived ones.
Curious whether anyone has encountered something similar or found a clever solution. I'm also willing to consider that we need to change our approach.