
Do your functional tests become regression tests?


Hi all,

This isn’t specifically a TestRail question, more of a general process question… but it’s relevant to how I will ultimately use TestRail:

Say you have Product A (version 1.0), and you’ve created 200 tests for it. Since the product already existed, those 200 tests are presumably regression tests, because we could run them over and over any time new features are added to the product, to ensure the old features still work right.

So… let’s say a few new features are added to the product for version 1.1… and we create 100 new system test cases for that release, in the same suite as our 200 existing regression test cases. Now we have 300 test cases.

When the time comes to test… we create a TestRail test plan which has several configurations in it (multiple OS/browsers, etc.)… some for the 100 new system tests… some for the 200 regression tests. We use the test type field to differentiate between the regression & system tests so we can select them into the correct run.

All tests pass, and version 1.1 goes to production. Do we change the test type of those 100 new system tests we wrote from system to regression now?

Say the business wants to add more features for version 1.2, and you write 100 new system test cases for those new features.

For the version 1.2 test run… will we be executing 100 system tests plus 300 regression tests (200 from v1.0 + 100 from v1.1)?

Do we always keep converting our system tests into regression tests whenever the features are deployed to production? If so… what kind of heuristic do you use to select which regression test cases to run, since the regression pool keeps expanding with every new release?


From my point of view, not all system tests automatically become regression tests at some point. So it’s not a blind copy after a release.

If part of the software has changed, then the older tests for this part/module/component (however you call it) become part of the regression tests.

“Regression testing is the process of testing changes to computer programs to make sure that the older programming still works with the new changes.” So whenever a change in a new release affects old code, the tests covering that code become regression tests.

One of the testing principles is that exhaustive testing is impossible. So with an ever-increasing number of tests, you will never be able to fully test your system, nor run every test case in your repository.

It’s the tedious, time-consuming part of the testing effort to identify the correct/significant subset of your test repo to cover the right mix of system and regression tests.

Ideally, most regression testing can be automated; then the sheer number of tests is normally not an issue.

My 2 cents


Thanks. That fits my expectations. So the next question becomes… where specifically do you create your system tests, relative to your regression tests?

The TestRail developers encourage us to put new system tests into the same suite that contains the app’s regression tests, which absolutely makes sense in some ways (i.e. they are all targeting the same app, so why not have them in the same suite and differentiate by some kind of custom tag and/or sections).

Since you mentioned blind copy, however, I’m wondering whether you create your system tests in a separate suite, then copy a subset into the main regression pool once system testing is done? My fear with that approach is that we lose whatever execution duration data has been recorded, which is helpful for forecasting future runs.

Furthermore… do you typically delete a percentage of your system tests once that testing phase is complete… or do you just keep accumulating them somewhere alongside your regression tests in case you need them again?


I have to say that I haven’t worked with TestRail long enough to answer this question. My post was more of a theoretical idea.

How exactly this can be accomplished best in TestRail is something I also need to find out.


Let’s say you created a new mobile app. I would create cases for functional, automation, smoke, etc. The app is at v1.0. During several iterations, v1.1 and on, simply add additional cases and modify existing ones. At this point nothing is regression.

Now, a major modification occurs: v2.0. I would flag all functional cases as regression, along with some others. This represents features that should continue to work in the new app. Any new feature modifying old code should have two cases: a regression case focusing on what hasn’t changed, and a new case covering the new feature content. I tend to create cases as small as possible and stay clear of cases with many steps.


For me, a test case is a test case. It tests part of the product. You can use a folder tree structure to separate ‘system’ and ‘regression’ if you’d like, ex: we have a folder for ‘ongoing feature development’ that we use for tests that aren’t quite in the default branch yet…

But in general, if it’s going in the release version (whatever the default branch is) it goes in the suite, system or not. Then we use the priority field (1, 2, 3) to define its importance for regression. We use the type field to define what type of test it is, ex: functional, security, performance… we don’t list regression here.

We never delete test cases unless they are no longer relevant to the code base (ex: feature removed). All tests are important and should be kept.

Then when we do our normal work, we’d grab the feature/system tests because everything is separated nicely in folder trees, and if we want to regress an area (because we can’t regress everything), we’d grab the P1 tests for the area that we want to regress. If a lot of work was done in that area, we’d also grab the P2 ones. If the entire area was completely touched and vulnerable… only then would we grab the P3s.
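That P1/P2/P3 heuristic can be sketched in a few lines of Python. This is just an illustration of the selection logic, not TestRail’s API: the `cases` list and its `section`/`priority` field names are assumptions standing in for whatever an export or API call would actually return.

```python
# Sketch of the priority-based regression selection described above.
# The field names (section, priority) are illustrative placeholders.

def select_regression(cases, area, churn):
    """Pick regression cases for one feature area.

    churn: "light" -> P1 only
           "heavy" -> P1 + P2 (a lot of work done in the area)
           "full"  -> P1 + P2 + P3 (area completely touched)
    """
    max_priority = {"light": 1, "heavy": 2, "full": 3}[churn]
    return [c for c in cases
            if c["section"] == area and c["priority"] <= max_priority]

cases = [
    {"id": 1, "section": "Login", "priority": 1},
    {"id": 2, "section": "Login", "priority": 2},
    {"id": 3, "section": "Login", "priority": 3},
    {"id": 4, "section": "Reports", "priority": 1},
]
print([c["id"] for c in select_regression(cases, "Login", "heavy")])  # [1, 2]
```

The nice property of encoding it this way is that the “how much churn happened here” judgment stays a one-word input, while the priority cut-off does the rest.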


Thanks for the reply. Are you saying that you generally use sections & sub-sections to group your tests according to different feature areas in an app (i.e. Login, User Profile, Feature C, Feature D)?


Yes, it’s worked for us… you need to group your test cases in some manner… automation/dev would obviously prefer you match their code, but for us it made more sense to do more of a map… so we’d have the following structure:

Library > Component > Features > Sub Features

ex: Library > Manager > Login > SSO
Library > Backend > Logging > Audit


Then we’d have other folders for ‘Ongoing Feature Development’ where we’d have sections for features in there that aren’t quite ready for the library yet… think of it like a sandbox for qa tests… then when they are tested/released, we move them up

We also have a folder for Trash because we removed the delete-test-case rights and ask people to just move cases into Trash; then we empty it after each release (if someone deletes something accidentally, you basically need to restore a db backup, … pain point)

We also introduced a field recently called ‘component’… if you use Jira you likely know what this is. We don’t go too detailed for component, but instead try to have a high-level overview… ex: Authentication… which groups login, logout, SSO, etc… then we make the field required, and we use a multi-select field so you can assign multiple components. We have this because while we try to group all features together, we didn’t have a way to say ‘ok, I want to test the entire Reporting framework’… we had a section for Reporting, but it contained generic tests, and each feature might have its own reporting test in the feature folder… So using component I can say ‘give me all tests with component = reporting’ and get what I want.
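As a hedged sketch of that lookup: TestRail’s REST API has a `get_cases` endpoint, and multi-select custom fields come back as lists of option IDs, so one way to get “all tests with component = reporting” is to pull the suite’s cases and filter client-side. The base URL, credentials, and the `custom_component` field name below are placeholders for whatever your instance uses.

```python
# Hedged sketch: fetch a suite's cases from TestRail and filter on a
# multi-select "component" custom field. URL/credentials/field name
# are placeholders -- adjust to your own instance.
import base64
import json
import urllib.request

BASE = "https://example.testrail.io/index.php?/api/v2"          # placeholder
AUTH = base64.b64encode(b"user@example.com:api_key").decode()   # placeholder

def filter_by_component(cases, component_id):
    # Multi-select custom fields are returned as lists of option IDs;
    # "custom_component" assumes the field's system name is "component".
    return [c for c in cases
            if component_id in (c.get("custom_component") or [])]

def cases_for_component(project_id, suite_id, component_id):
    req = urllib.request.Request(
        f"{BASE}/get_cases/{project_id}&suite_id={suite_id}",
        headers={"Authorization": f"Basic {AUTH}",
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    # Newer TestRail versions paginate and wrap the list as {"cases": [...]}
    cases = payload.get("cases", payload) if isinstance(payload, dict) else payload
    return filter_by_component(cases, component_id)
```

Filtering client-side keeps the sketch simple and sidesteps version differences in which filters `get_cases` accepts server-side.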

hope that helps


Appreciate the add’l details. It’s similar to what we’re doing, except that I don’t have a top level folder labeled “Library”. We mostly just group by Features > Sub-Features. Component is reflected by a component field like you mentioned later. Our TestRail component items ideally match the Jira components for that product. That way if a particular development effort “touches” a given component (identified in the Jira issue)… we can filter our tests by that (TestRail) component to identify relevant regression tests.

For us… each “Suite” (a TestRail term) represents a different product that must be tested. Presumably, products will all be different enough that they would all require different test cases.

Within those respective Suites, we have created folders (Sub-Sections) that represent different feature areas of the respective app. This is where I think we both deviate from Gurock’s recommendation NOT to use folders (Sections & Sub-Sections) to group tests; per their guidance, we should instead be using a field (like components, etc.) that can be filtered when it comes time to select & add tests to test runs/plans.

I still feel like having the visual tree of tests provides valuable peripheral information. It’s easier (especially for a new hire) to get a “mental model” of what kind/volume of features an app has, and how they relate to one another just by skimming through the tree hierarchy.

Regardless… in my original query I mentioned we are using the “Type” field (I think this is a stock TestRail field?) to denote what kind of test it is (system vs. regression) because that allows us to see add’l color in reports (i.e. were the majority of the current test run failures from regression tests, or the new system tests?). That would be useful if both types of tests were being mixed within the 1 “library”, or feature Sub-Sections in my case.

What we ended up doing is creating add’l Sections to hold the new tests for each new release (similar to your “Ongoing Feature Development”). We further organize those new “System” tests into Sub-Sections that have the associated Jira Issue number in the Sub-Section title for easy visual traceability.

Once a release is complete & deployed… the idea is to then move those new tests from the release area into the most appropriate feature sub-sections mentioned above. At this time… their “Type” field would be toggled from “System” to “Regression” as well.


Sounds like we’re similar, which is good :slight_smile:

I don’t think we’re really going against groups. I think they are necessary. Testrail has a shortcoming when it comes to grouping features together. What I admire from other test tools is they can group features together into suites of tests, then you can say ‘ok these are the valid configurations for this suite’ and then execute against one or more of those config combinations. I hope testrail will eventually get to that granularity. For now… configs are not enforced, configs are not tied to test cases, only sections (+ components) can be used to group feature tests together, & suites are intended for separate products. The workarounds we use (sections, config best practices, components) work for now, but it would be nice to not have to do them…

As for switching type, up to you if that’s what you want to do. I’ll reiterate what I posted before, it’ll be the same response but we’ve done some changes since…

For us we have types: manual & automated, then a sub-type (custom dropdown): functional, security, ux, performance, integration. The reason we have sub-types is that we have about 50 other projects using testrail, so we need to use lower-level custom fields for customization and keep the system fields generic.

So we label the test typically as functional, and then we use priority to determine whether or not it’s intended to be regressed. We denote that a high priority means it must be regressed every release. Medium would be ‘if the feature was touched’, low would be ‘only if necessary’ (ex: full feature regression).