Enhancement: Edit test result (add attachment)


#1

Hi,

We realized that when we add a test result, we can add attachments, pictures, etc.

But if we want to edit the test results, we cannot do those actions.

Would it be possible to fix that in an upcoming release?

Thanks in advance,

Louis


#2

Hi Louis,

Thanks for the feedback. We will look into this and consider supporting adding new attachments to existing test results. The main intention behind editing test results is to fix smaller errors in the description or other fields you’ve entered, so we don’t currently support changing the attachments or modifying the actual result status. But it’s something we will consider for a future update. For now you could simply add another test result to “override” or amend the previous result.

Thanks,
Dennis


#3

I understand.

The main reason we would like to add other attachments is, for example, when you forgot to attach a screenshot while adding the test result and would like to add it afterwards.


#4

Hello Louis,

You can also use the Add Comment or File button in the sidebar to add additional files or comments to a test, even without having to add another actual test result.

I hope this helps.

Regards,
Dennis


#5

I am also interested in this capability. When our automated tests run, they record test status in TestRail. When we get to the office in the morning, we go through all the test failures, triage them, and file bugs for them. I want my team to be able to edit the test result to add in the bug associated with the failure. I know I can add a comment, but I really want the bug associated with the test failure. I know I can add another test result, but I don't want it to look like the test failed twice.


#6

Hi Bill,

Thanks for your feedback! You can edit your own test results (or if the test results were added by a task with your own user account) and add additional details (such as pushing/entering defects). The Edit option is only available for 24 hours by default but you can increase or disable this limit under Administration > Site Settings > User Interface. If the results are added by a different user account, you can always add a new result with the same status and this wouldn’t change the overall statistics.

Cheers,
Tobias


#7

Tobias, it does not look like TestRail is set up for automated testing the way I am used to. Here is the use case I would like to support using TestRail. Please look it over and see if there is an easy way to do this:

  1. Our automation initiates the execution of a set of tests.
  2. As each test completes, a test result is stored in TestRail.
  3. When tests fail, a set of artifacts is stored in a triage directory. These include dumps, logs, screenshots, etc.
  4. An engineer comes in and looks through the artifacts for each failed test. They then decide if a failure is because of an existing problem (duplicate) or a new problem, and files a new bug.
  5. Then, they post the bug number associated with the failed test.

So, the tests are initiated through our automation, but we have a team of three people who triage failures and create bugs for reporting problems back to development.
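Steps 1 and 2 of this workflow map onto TestRail's API v2, which (per its documentation) accepts one `POST` to `add_result_for_case` per completed test. The sketch below only builds the request URL, JSON body, and basic-auth header; the base URL, run/case IDs, and status values are placeholders, and the numeric status IDs (1 = Passed, 5 = Failed) are TestRail's defaults, which an administrator can change per installation.

```python
import base64
import json

# Default TestRail status IDs; treat these as assumptions, since
# installations can define custom statuses.
STATUS_PASSED = 1
STATUS_FAILED = 5

def build_add_result_request(base_url, run_id, case_id, status_id,
                             comment="", defects=""):
    """Build the URL and JSON body for TestRail's add_result_for_case
    endpoint (API v2): one POST per completed automated test."""
    url = f"{base_url}/index.php?/api/v2/add_result_for_case/{run_id}/{case_id}"
    payload = {"status_id": status_id}
    if comment:
        payload["comment"] = comment
    if defects:
        # Comma-separated defect IDs, e.g. "BUG-123", link bugs to the result.
        payload["defects"] = defects
    return url, json.dumps(payload)

def auth_header(user, api_key):
    """TestRail's API uses HTTP basic auth with email:API-key."""
    token = base64.b64encode(f"{user}:{api_key}".encode()).decode()
    return {"Authorization": f"Basic {token}",
            "Content-Type": "application/json"}
```

The automation harness would call `build_add_result_request(...)` after each test and send the result with whatever HTTP client it already uses; the triage step in 4–5 is what the rest of this thread is about.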

Thanks,
Bill


#8

Hi Bill,

Thanks for the feedback. I would recommend simply adding a test result to the tests once a tester confirms there's an issue, or, e.g., setting the test to Passed if it's not really a problem. This approach works much better than trying to change a previous result, as you will have a very transparent history of everything that happened: you would always be able to see the original failed result submitted by the automated test, followed by the additional comment added by the triage team.

If you were to directly modify a previous result, all of this information would be lost and you wouldn't be able to see who made which changes on which day. This approach has many advantages and practically no disadvantages, and we would recommend building a full, transparent history to make reviewing this easier.


#9

Here are the disadvantages to your proposal:

  • If we create a new failure when we enter the bug information, it looks like the test was run twice when it was only actually run once. It also looks like the person who did the triage also ran the test when they did not. They only looked at the result. This will mess up the test history, as each test will have two failed records for each test failure.
  • If we delay entering the results until they have been reviewed and failures triaged, we get the accounting we want (and avoid the issues raised above), but we do not get real-time accounting of how the testing is going as it progresses.
  • If we use a comment to document the connection between the test failure and bug, then I do not get the bug associated with the test run in the reporting and test history which I like.

I see your point about the accounting and audit trail, and I would certainly like that too. In my case, since the test is executed and the results are reported by the automation software, knowing "who" entered the information is not relevant: when a test is automated, the automation enters the information, so there is no person doing it. If the bug number is present for an automated test, changing the last-modified field to the person who modified it would be fine. Ideally I would like a history of who changed the results, so that when the triage person adds the bug information we would know who did it, but if that is too hard, I think the ability to add the bug to an existing test failure is more important to me.

Looking through the API, if we could update a test result, we could write a script to update the test result using the same user ID that created the result. Today, it does not look like this functionality is available in the API. You can update a test result (if it is yours) through the GUI, but we don’t see this capability in the API. If you were to add that, I think we could get what we want.
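Until the API supports updating a result in place, the closest workaround along the lines Tobias suggested is to re-post the latest result with the same status plus the triaged defect ID. A minimal sketch, assuming the shape TestRail's documented `get_results` endpoint returns (newest result first, with a `status_id` field) and the `defects`/`comment` fields accepted by `add_result`:

```python
def build_triage_followup(latest_result, defects, comment=""):
    """Given the most recent result dict for a test (as returned first by
    TestRail's get_results endpoint), build an add_result payload that
    repeats the same status and attaches the triaged defect ID.

    Because run statistics count only the latest result per test, repeating
    the status should leave the overall pass/fail numbers unchanged; the
    run *history* will still show two entries, which is exactly the
    limitation discussed in this thread.
    """
    payload = {
        "status_id": latest_result["status_id"],  # keep the same status
        "defects": defects,                       # e.g. "BUG-123"
    }
    if comment:
        payload["comment"] = comment
    return payload
```

For example, if `get_results` returns a failed result, `build_triage_followup(result, "BUG-123", "Triaged: known issue")` yields the body for a follow-up `add_result` POST made under the triage script's account.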


#10

Hi there,

Thanks for the feedback. For the test run status statistics, only the latest result is used, so the test would only be counted as failed once. Only in the activity statistics would each result be counted, but that's expected of the activity report, as it only shows recent activity. So I still think the suggested approach would work much better and would be the cleaner option. But we are also happy to review making it easier to add details to previous results.


#11

Sorry, I might not have been clear. It is not the run status statistics, but the run history that gets misrepresented: each failure makes it look like the test was run twice. The suggested approach does not satisfy the way I would like our testing data to be represented. If you would consider making it easier to add details, or even support through the API the same ability to update a test result using the same account that is available through the GUI, we could build the functionality I am looking for.

For what it is worth, I asked a couple of long-time TestRail users here if I was missing something in the product that would allow me to update the bug information through the API later, and they said that sadly this was something missing from the product. If you would consider adding it, I think there might be others that would appreciate it also.

Thanks for your time.


#12

Thanks for the additional details, Bill! The behavior of being able to edit your own results only is a security feature but I understand that a bit more flexibility would be great to have in this case. It might make sense to add a separate permission for this so admins would have control over this feature and could also customize the behavior. We will make sure to look into this and I agree that others would benefit from this as well. Thanks again for your feedback!

Cheers,
Tobias


#13

Thanks for considering this enhancement Tobias!


#14

Did you fix this? I'm kind of in need of this, as I have to select which of the reported bugs should be pushed to our issue tracker.


#15

Hi,

I would like to join the request for this feature. Our use case is exactly as described by Billgo, and this is the major blocker to being able to fully utilize TestRail in our setup.

Thanks,
Ludovico