Thanks Tobias, I should've given an example to go along with my question. Some of our tests are a single test that comprises multiple iterations. An example would be a power cycle test: we power down a device, verify that it powers down as expected, wait a period of time, restore power, and verify that the device powers up as expected, repeated for 100 iterations. The 100 iterations might be executed against a single release or spread across multiple releases.

Suppose the 100 iterations were spread across 4 releases, with 25 iterations per release, and the runs failed on the 5th iteration on the 1st release and the 10th iteration on the 2nd release, failed on the 18th iteration on the 3rd release, and completed all 25 iterations on the final release. We would then be able to look at the reliability of the power cycle function and how it improved or regressed through the various releases: 1st release reliability metric = 20%, 2nd release = 40%, 3rd release = 72%, 4th release = 100% (the iteration at which each run stopped, divided by the 25 planned iterations).

Our requirement is, first, to be able to record iteration counts for each release, and second, to be able to easily pull those counts for daily status and test exit reports.
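For clarity, here is a minimal sketch of how we derive that metric. The record layout and names are hypothetical, not anything from the tool itself; it just assumes each release stores the planned iteration count and the iteration at which the run stopped:

```python
# Hypothetical per-release records: (release, planned_iterations, iteration_reached).
# iteration_reached == planned_iterations means the run completed cleanly;
# otherwise it is the iteration on which the test failed.
runs = [
    ("release-1", 25, 5),
    ("release-2", 25, 10),
    ("release-3", 25, 18),
    ("release-4", 25, 25),
]

for release, planned, reached in runs:
    reliability = reached / planned * 100  # e.g. 5/25 -> 20%
    print(f"{release}: {reliability:.0f}% ({reached}/{planned} iterations)")
```

So as long as the per-release iteration counts are recorded somewhere we can query, producing the numbers for the daily status and test exit reports is a trivial calculation like the one above.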