Do your functional tests become regression tests?


Hi all,

This isn’t specifically a TestRail question, more of a general process question… but it’s relevant to how I will ultimately use TestRail:

Say you have Product A (version 1.0), and you’ve created 200 tests for it. Since the product already existed, those 200 tests are presumably regression tests, because we could run them over and over any time new features are added to the product, to ensure the old features still work right.

So… let’s say a few new features are added to the product for version 1.1… and we create 100 new system test cases for that release, in the same suite as our 200 existing regression test cases. Now we have 300 test cases.

When the time comes to test… we create a TestRail test plan which has several configurations in it (multiple OS/browsers, etc.)… some for the 100 new system tests… some for the 200 regression tests. We use the test type field to differentiate between the regression & system tests so we can select them into the correct run.
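To make that selection step concrete, here is a minimal sketch of splitting one suite's cases into the two run lists by a type field. The case data and field names are invented for illustration; real TestRail cases would come from its UI or API, not hard-coded dicts.

```python
# Hypothetical sketch: partition one suite's cases into separate runs
# by a "type" field, as described above. Case data is made up; the
# field names are assumptions, not actual TestRail identifiers.

def split_by_type(cases):
    """Group case IDs by their test type ('system' vs 'regression')."""
    runs = {"system": [], "regression": []}
    for case in cases:
        runs[case["type"]].append(case["id"])
    return runs

# 200 existing v1.0 regression cases + 100 new v1.1 system cases
suite = (
    [{"id": i, "type": "regression"} for i in range(1, 201)]
    + [{"id": i, "type": "system"} for i in range(201, 301)]
)

runs = split_by_type(suite)
print(len(runs["regression"]), len(runs["system"]))  # 200 100
```

Each list of IDs would then back one run in the test plan, repeated per OS/browser configuration.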

All tests pass, and version 1.1 goes to production. Do we change the test type of those 100 new system tests we wrote from system to regression now?

Say the business wants to add more features for version 1.2, and you write 100 new system test cases for those new features.

For the version 1.2 test run… will we be executing 100 system tests plus 300 regression tests (200 from v1.0 + 100 from v1.1)?

Do we always keep converting our system tests into regression tests whenever the features are deployed to production? If so… what kind of heuristic do you use to select which regression test cases to run, since the regression pool keeps expanding with every new release?


From my point of view, not all system tests automatically become regression tests at some point. So it's not a blind copy after a release.

If part of the software has changed, then the older tests for that part/module/component (however you call it) become part of the regression tests.

“Regression testing is the process of testing changes to computer programs to make sure that the older programming still works with the new changes.” So whenever a change in a new release affects old code, the tests covering that code become regression tests.

One of the testing principles is that you cannot test exhaustively. So with an ever-increasing number of tests, you will never be able to fully test your system, nor execute every test case in your repository.

It's the tedious, time-consuming part of the testing effort to identify the correct/significant subset of your test repository that covers the right mix of system and regression tests.

Ideally, most regression testing can be automated; then the sheer number of tests is normally not an issue.

My 2 cents


Thanks. That fits my expectations. So the next question becomes… where specifically do you create your system tests, relative to your regression tests?

The TestRail developers encourage us to put new system tests into the same suite that contains the app’s regression tests, which absolutely makes sense in some ways (i.e. they all target the same app, so why not keep them in the same suite, differentiated by some kind of custom tag and/or sections).

Since you mentioned blind copy, however, I’m wondering whether you create your system tests in a separate suite, then copy a subset into the main regression pool once system testing is done? My fear with that approach is that we lose whatever execution duration data has been recorded, which is helpful for forecasting future runs.

Furthermore… do you typically delete a percentage of your system tests once that testing phase is complete… or do you just keep accumulating them somewhere alongside your regression tests in case you need them again?


I have to say that I haven’t worked with TestRail long enough to answer this question. My post was more of a theoretical idea.

How exactly this can be accomplished best in TestRail is something I also need to find out.


Let’s say you created a new mobile app. I would create cases for functional, automation, smoke, etc. The app is at v1.0. Through several iterations, v1.1 and onward, simply add additional cases and modify existing ones. At this point nothing is regression.

Now, a major modification occurs: v2.0. I would flag all functional cases as regression, along with some others. These represent features that should continue to work in the new app. Any new feature that modifies old code should have two cases: a regression case focusing on what hasn’t changed, and a new case covering the new feature content. I tend to create cases as small as possible and stay clear of cases with many steps.


For me, a test case is a test case. It tests part of the product. You can use a folder tree structure to separate ‘system’ and ‘regression’ if you’d like, ex: we have a folder for ‘ongoing feature development’ that we use for tests that aren’t quite in the default branch yet…

But in general, if it’s going into the release version (whatever the default branch is), it goes in the suite, system or not. Then we use the priority field (1, 2, 3) to define its importance for regression. We use the type field to define what type of test it is, ex: functional, security, performance… we don’t list regression here.

We never delete test cases unless they are no longer relevant to the code base (ex: feature removed). All tests are important and should be kept.

Then when we do our normal work, we grab the feature/system tests, since everything is separated nicely in folder trees. When we want to regress an area (because we can’t regress everything), we grab the P1 tests for the area we want to regress. If a lot of work was done in that area, we also grab the P2s. Only if the entire area was completely touched and vulnerable would we grab the P3s.
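The priority-driven selection described above can be sketched in a few lines. This is a hypothetical illustration: the "change level" thresholds and the area/priority fields are assumptions for the example, not a TestRail feature.

```python
# Hypothetical sketch of priority-based regression selection:
# the more heavily an area changed, the deeper the priority levels
# (P1 -> P2 -> P3) we pull into the regression run.

def pick_regression(cases, area, change_level):
    """Select regression case IDs for an area.

    change_level: 1 = some work done (P1 only),
                  2 = a lot of work (P1 + P2),
                  3 = area completely touched (P1 + P2 + P3).
    """
    return [
        c["id"]
        for c in cases
        if c["area"] == area and c["priority"] <= change_level
    ]

# Invented example cases
cases = [
    {"id": 1, "area": "login", "priority": 1},
    {"id": 2, "area": "login", "priority": 2},
    {"id": 3, "area": "login", "priority": 3},
    {"id": 4, "area": "billing", "priority": 1},
]

print(pick_regression(cases, "login", 1))  # [1]
print(pick_regression(cases, "login", 3))  # [1, 2, 3]
```

The design choice here mirrors the post: priority encodes regression importance per area, while the separate type field stays free for functional/security/performance labels.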