
Multiple regression test runs from same test suite and custom fields


I created multiple regression test runs from the same test suite and assigned a test run to each tester. I also added custom fields, e.g. Test Build Number, Operating System, and Machine Details.

Tester A executed a test run and entered values for the custom fields, e.g. Test Build Number, Operating System, and Machine Details.

Tester B then executed a test run and was able to overwrite all the data Tester A had entered in the Test Build Number, Operating System, and Machine Details fields.

I thought the test suite is where you store all your base test cases, which you can then reference to create multiple regression test runs. Each test run should allow a tester to enter his/her own test data.

Please confirm if this is a defect that needs to be fixed ASAP.


I think what you want to do is create a Test Plan - one for each tester. Within each plan is a separate run - I think… :slight_smile:


Thanks for your response. Unfortunately, it didn't work. I think there is an issue with how the test suite is configured. It seems that you cannot edit custom fields during a test run. For example, if you have a custom field for Machine Details, you have no choice but to edit the test case itself. Consequently, the value becomes a permanent part of the test case in the test suite, and if another tester references the same test suite from another test run, he/she can literally overwrite the existing test data. Major issue/flaw with TestRail. Please fix.


Sounds more like the data you want to save belongs with the Test Run/Plan rather than the test case itself. Assuming 'Machine Details' means the details of the machine the test was run on, tracking it via a test case would require a separate test case for every machine.

I know that when adding Test Results you can enter version details. With test plans, Configurations can be used (but not with standalone test runs). Configurations are created via the admin area.

Tobias will likely chime in tonight or tomorrow, but if I understand the explanation correctly, I think this is the answer, based on what I have learned via my company's local install of TR and how we use it.


These entries were made by Tester 1, who created a test run using the base test suite of test cases.

Tester 2 creates his own test run using the same base test suite of test cases.

When Tester 2 accesses the test cases from his own test run, generated from the same base test suite, these custom fields appear pre-populated. They should be blank for Tester 2.


Looks like your custom fields are defined on the test "Case", which is indeed shared across all test runs. Instead, I think you should be defining them (and setting them) on your test "Results".


Thanks, but that's not going to work for me either. Now, if those top-section custom field types were available for the bottom section, that would work for me. As it stands, it looks really ugly and unprofessional to have a huge text box just for the Build Number et al.


Hello Crispin,

Thanks for your posting. If you want to enter those values when adding test runs, you would need to add the custom fields on the test result level (Administration > Customizations > Test Result Fields). That said, there’s already a built-in field for entering the build number (Version) on the Add Test Result dialog. You can also look into using the Milestone field on the test run/plan level to manage versions or releases.
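For anyone scripting this, the same result-level fields can be set through TestRail's API when adding results. Below is a minimal sketch of the JSON body for the `add_result_for_case` endpoint; `custom_machine_details` is a hypothetical Test Result custom field used for illustration (custom fields are submitted with a `custom_` prefix), while `status_id` and `version` are built-in result fields:

```python
# Sketch: build the JSON body for TestRail's
# POST index.php?/api/v2/add_result_for_case/:run_id/:case_id
# Result-level custom fields (defined under Administration > Customizations >
# Test Result Fields) are passed with a "custom_" prefix.
# "custom_machine_details" is a hypothetical field name for illustration.

def build_result_payload(status_id, version, machine_details):
    """Assemble the result payload with built-in and custom fields."""
    return {
        "status_id": status_id,   # e.g. 1 = Passed in a default installation
        "version": version,       # built-in Version field on Add Test Result
        "custom_machine_details": machine_details,
    }

payload = build_result_payload(1, "build-1042", "Win10 / 16GB RAM / Chrome")
```

Because the fields live on the result rather than the case, each tester's entries stay with his or her own run and cannot overwrite anyone else's data.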

For operating systems, we recommend using test plans and configurations. You can learn more about configurations with the following training video:

Machine details can also be covered via configurations or a custom field. Custom fields can use different types (string, dropdown, multi-select, text and more) and you can learn more about custom fields here:
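As a rough illustration of the configurations approach, here is a sketch of the body for the `add_plan_entry` API endpoint, which generates one run per configuration inside a test plan. The specific IDs are made-up placeholders; in a real installation they come from the configurations you define in the admin area:

```python
# Sketch: body for TestRail's POST index.php?/api/v2/add_plan_entry/:plan_id
# "config_ids" lists all configuration IDs the entry uses, and each item in
# "runs" pins one run to one configuration combination.
# The IDs below are placeholders for illustration only.

def build_plan_entry(suite_id, all_config_ids, per_run_configs):
    """Assemble a plan entry that creates one run per configuration."""
    return {
        "suite_id": suite_id,
        "config_ids": all_config_ids,
        "runs": [{"config_ids": combo} for combo in per_run_configs],
    }

entry = build_plan_entry(
    suite_id=3,
    all_config_ids=[11, 12],       # e.g. 11 = Windows, 12 = macOS
    per_run_configs=[[11], [12]],  # one run per operating system
)
```

Each tester (or each OS/machine) then gets a dedicated run, so the results, and any result-level custom fields, are kept separate per configuration.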

We are also happy to help in case you have any questions.



You can add normal text boxes to your results. Here is one we added for browser version for our UI tests:

and here is the customization for the test result field:


Hello Ryan,

Thanks for your posting. We usually recommend using configurations to test against different browsers and versions but a custom field can also be used for this of course:

Using the configurations has the advantage that you can easily compare test runs for different browsers and/or versions using various reports on the Reports tab, e.g. the Results > Comparison for Cases report:

Another example (with configurations):

Thanks again,



We do use configurations for the browser type, but with the rapid deployments of Chrome and FF, we keep generic IE9, IE10, Firefox, Chrome, Safari, etc. configurations and put the actual version number in the result.



It can still make sense to use configurations for this (for better reporting) but I agree that using a generic browser configuration is usually sufficient (especially as the browsers have such a short release cycle nowadays as you mentioned).