
Completed Dates on the Test Runs & Results panel


On the Test Runs & Results page, there is a discrepancy between the chronological order of tests and the displayed dates. Please consider providing an option to sort runs & plans by last recorded activity date instead of the Completed date. Here is an example:

Run A: Friday evening, automated tests fail (74%; run kept active until QA inspection/annotation)
Run B: Saturday evening, automated tests fail (84%; run kept active until QA inspection/annotation)
Run C: Sunday morning, automated tests pass (100%; plan is automatically closed)

On Monday morning, an early-bird QA Engineer annotates the failed test results with reasons/causes and closes Runs B & A.

All runs & plans are now marked Completed, and the Test Runs & Results page now displays this:
Sunday 100%
Monday 84%
Monday 74%

At 8:15, an IT Director opens a link to the Runs & Results panel, assumes something bad just started happening, and concludes that tests were healthy until Monday morning. Without drilling down to the activity dates or reading any comments on the results, they walk away misled. The easiest solution I can see for improving the product would be to provide either:

  1. an option to set a run/plan's completed time either to the current time or to the time of the last recorded/modified status result (Pass/Fail/Blocked/Rerun), or
  2. a project-level display setting to show the page using the activity date instead of the completed date.
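To illustrate the difference between the two orderings, here is a minimal sketch of the scenario above. The field names (`completed`, `last_activity`) are hypothetical stand-ins, not TestRail's actual schema; the point is only that sorting by last activity restores the order the IT Director would expect:

```python
from datetime import datetime

# Hypothetical run records mirroring the example above.
runs = [
    {"name": "Run A", "pass_rate": 74,
     "last_activity": datetime(2024, 1, 5, 20, 0),   # failed Friday evening
     "completed": datetime(2024, 1, 8, 8, 0)},       # closed Monday morning
    {"name": "Run B", "pass_rate": 84,
     "last_activity": datetime(2024, 1, 6, 20, 0),   # failed Saturday evening
     "completed": datetime(2024, 1, 8, 8, 5)},       # closed Monday morning
    {"name": "Run C", "pass_rate": 100,
     "last_activity": datetime(2024, 1, 7, 9, 0),    # passed Sunday morning
     "completed": datetime(2024, 1, 7, 9, 0)},       # auto-closed immediately
]

# Current behavior: newest completion first -> the 100% run looks oldest.
by_completed = sorted(runs, key=lambda r: r["completed"], reverse=True)

# Proposed behavior: newest activity first -> matches the real test history.
by_activity = sorted(runs, key=lambda r: r["last_activity"], reverse=True)

print([r["name"] for r in by_completed])  # Run B, Run A, Run C
print([r["name"] for r in by_activity])   # Run C, Run B, Run A
```

With the completed-date ordering, the two Monday-closed failures float above the Sunday pass; with the activity-date ordering, the 100% run correctly appears as the most recent result.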


Another potential solution would be an option that lets users revise comments or add information to custom fields in test results even after the plan or run is closed. QA teams may want the ability to revise labels on historical records to improve tracking of failure-related trends. Currently, closing the plan sets the timeframe when the run was completed and freezes the status of the tests, which is desired. Unfortunately, it goes a step too far for us by also locking you out of improving historical data, a lost opportunity to provide new statistics about past failures.


Hello Matus,

Thanks for your feedback on this. I understand that the behavior may not be optimal in this case, and we will think about ways to make this more flexible. For example, in the latest version (3.1) we added a few options for changing the grouping/ordering of active test runs/plans, and I believe something similar would also make sense for completed test runs. I like the idea of using the last activity for the ordering.

We will make sure to consider this for a future version and I’ve just added it to our feature request list, thanks!